From stirabos at redhat.com Thu Feb 1 08:12:16 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 1 Feb 2018 09:12:16 +0100 Subject: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration. In-Reply-To: <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> References: <22ca0fd1.aee0.1614c0a03df.Coremail.pym0914@163.com> <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> Message-ID: On Thu, Feb 1, 2018 at 2:21 AM, Pym wrote: > > I checked vm1: it stays in the up state and can be used, but on host1, > after the shutdown, there is a suspended vm1 left behind that cannot be used. This is the problem > now. > > On host1, you can get the information of vm1 using "vdsm-client Host > getVMList", but you can't get the vm1 information using "virsh list". > > Maybe a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1505399 Arik? > > > > At 2018-02-01 07:16:37, "Simone Tiraboschi" wrote: > > > > On Wed, Jan 31, 2018 at 12:46 PM, Pym wrote: > >> Hi: >> >> The current environment is as follows: >> >> Ovirt-engine version 4.2.0 is a source code compilation and >> installation. Two hosts, host1 and host2, were added. On host1 a >> virtual machine vm1 was created, and vm2 was created on host2; HA was >> configured. >> >> Operation steps: >> >> Use the shutdown -r command on host1. vm1 successfully migrated to host2. >> When host1 is restarted, the following situation occurs: >> >> The state of vm2 is shown in the two images, switching between up and >> pause. >> >> When I run "vdsm-client Host getVMList" on host1, I get the >> information of vm1. When I run "vdsm-client Host getVMList" on >> host2, I get the information of vm1 and vm2. >> When I run "virsh list" on host1, there is no virtual machine information. >> When I run "virsh list" on host2, I get the information of vm1 and vm2. 
>> >> How to solve this problem? >> >> Is it the case that vm1's information was not removed from host1 during >> the migration, or is there another reason? >> > > Did you also check if your VMs always remained up? > In 4.2 we have the libvirt-guests service on the hosts, which tries to properly > shut down the running VMs on host shutdown. > > >> >> Thank you. >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34515 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35900 bytes Desc: not available URL: From ddqlo at 126.com Thu Feb 1 08:13:14 2018 From: ddqlo at 126.com (=?GBK?B?tq3H4MH6?=) Date: Thu, 1 Feb 2018 16:13:14 +0800 (CST) Subject: [ovirt-users] active directory and sso Message-ID: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> Hi, all I am trying to make SSO work with a Windows 7 VM in an oVirt 4.1 environment. Ovirt-guest-agent has been installed in the Windows 7 VM. I have a Windows 2012 Active Directory server, and I have configured the engine using "ovirt-engine-extension-aaa-ldap-setup" successfully. The Windows 7 VM has joined the domain, too. But when I log in to the User Portal with a user created on the AD server, I still have to log in to the Windows 7 VM with the same user a second time. It seems that SSO does not work. Can anyone help me? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykaul at redhat.com Thu Feb 1 08:16:47 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 1 Feb 2018 10:16:47 +0200 Subject: [ovirt-users] New oVirt blog post - oVirt 4.2.2 web admin UI browser bookmarks Message-ID: oVirt web admin UI now allows the user to bookmark all entities and searches using their browser. Full blog post @ https://ovirt.org/blog/2018/01/ovirt-admin-bookmarks/ Y. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mperina at redhat.com Thu Feb 1 08:21:47 2018 From: mperina at redhat.com (Martin Perina) Date: Thu, 1 Feb 2018 09:21:47 +0100 Subject: [ovirt-users] Power management - oVirt 4.2 In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > Hi, > > From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. Try > using the standard ipmi. > It's not just an alias: ilo3/ilo4 also have different defaults than ipmilan. For example, if you use ilo4, then by default the following is used: lanplus=1 power_wait=4 So I recommend starting with ilo4 and adding any necessary custom options into the Options field. If you need some custom options, could you please share them with us? It would be very helpful for us; if needed, we could introduce ilo5 with different defaults than ilo4. Thanks Martin > Luca > > > > On 31 Jan 2018 at 11:14 PM, "Terry hey" wrote: > >> Dear all, >> Does oVirt 4.2 power management support iLO5? I could not see an iLO5 >> option in Power Management. >> >> Regards >> Terry >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. 
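The ilo4 defaults Martin describes map directly onto options of the generic fence_ipmilan agent, so they can be sanity-checked from a host's shell before configuring power management in the engine. This is only a sketch, not something oVirt runs verbatim: it assumes the fence-agents package is installed, and the BMC address and credentials below are placeholders.

```shell
# Query the BMC power status with the same defaults oVirt applies for the
# ilo4 agent (lanplus=1, power_wait=4). Requires a reachable iLO BMC;
# ilo.example.com / admin / secret are placeholders.
fence_ipmilan --ip=ilo.example.com --username=admin --password=secret \
              --lanplus --power-wait=4 --action=status
```

If this manual status query fails, the engine's ilo4 fence agent will most likely fail for the same reason, which makes it a useful first check before experimenting with custom Options values.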
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mperina at redhat.com Thu Feb 1 08:35:57 2018 From: mperina at redhat.com (Martin Perina) Date: Thu, 1 Feb 2018 09:35:57 +0100 Subject: [ovirt-users] active directory and sso In-Reply-To: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> References: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> Message-ID: On Thu, Feb 1, 2018 at 9:13 AM, 董青龙 wrote: > Hi, all > I am trying to make SSO work with a Windows 7 VM in an oVirt 4.1 > environment. Ovirt-guest-agent has been installed in the Windows 7 VM. I have a > Windows 2012 Active Directory server, and I have configured the engine > using "ovirt-engine-extension-aaa-ldap-setup" successfully. The Windows 7 > VM has joined the domain, too. But when I log in to the User Portal with a user > created on the AD server, I still have to log in to the Windows 7 VM with the > same user a second time. It seems that SSO does not work. > Can anyone help me? Thanks! > We are not providing full SSO for VMs. At the moment you have 2 options: 1. If you want the user to be automatically logged into a VM, then you need to set up SSO using the aaa-ldap extension for AD (please don't forget to answer Yes to the question about SSO for VMs in the setup tool). And of course, in the VM you need to have the guest agent installed and enabled. Once the user logs into VM Portal and clicks on a VM, he should be automatically logged into it. 2. If you set up Kerberos for engine SSO, then you don't need to enter a password to log in to VM Portal, but in that case we cannot pass a password into the VM and users are not automatically logged in. Martin > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From spfma at mail.fr Thu Feb 1 08:06:00 2018 From: spfma at mail.fr (spfma at mail.fr) Date: Thu, 01 Feb 2018 09:06:00 +0100 Subject: [ovirt-users] No available Host to migrate to Message-ID: <20180201080600.D6321806A6@smtp03.mail.de> Hi, What are the reasons that can cause this message to appear in a cluster where most machines are able to migrate without problem? I have this problem for the Engine VM, and I managed to solve it for another one whose configuration prevented any kind of migration. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Thu Feb 1 09:17:39 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 1 Feb 2018 10:17:39 +0100 Subject: [ovirt-users] ovirt 4.2.1 and uploading package profile problem Message-ID: Hello, I'm testing a 4.2.1 environment that uses a proxy for yum. I see that after enabling 4.2.1pre repos on my future host and installing ovirt packages, every yum command now gets this message in output: Uploading Package Profile and, after a couple of minutes' wait, Unable to upload Package Profile Searching the internet I found posts related to katello... Indeed, the ovirt install produced installation of the katello-agent and katello-agent-fact-plugin rpms (version 2.9.0.1-1). How can I solve this? Do I need to put the proxy anywhere, or do I have to disable some unnecessary service? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stirabos at redhat.com Thu Feb 1 09:17:49 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 1 Feb 2018 10:17:49 +0100 Subject: [ovirt-users] No available Host to migrate to In-Reply-To: <20180201080600.D6321806A6@smtp03.mail.de> References: <20180201080600.D6321806A6@smtp03.mail.de> Message-ID: On Thu, Feb 1, 2018 at 9:06 AM, wrote: > > Hi, > What are the reasons that can cause this message to appear in a cluster > where most machines are able to migrate without problem? > I have this problem for the Engine VM, and I managed to solve it for another > one whose configuration prevented any kind of migration. > Regards > The engine VM can be migrated only to hosted-engine-configured hosts. Do you have another hosted-engine host up and with the required resources to accommodate the engine VM? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Thu Feb 1 09:21:31 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 1 Feb 2018 10:21:31 +0100 Subject: [ovirt-users] engine add hosts In-Reply-To: <1517065542883.6567@lingtong.com> References: <1517065542883.6567@lingtong.com> Message-ID: Hi, host deploy logs and engine.log would be needed Thanks, michal > On 27 Jan 2018, at 16:05, ??? wrote: > > hello~ I want to add hosts, but my host is offline and cannot connect to the internet. When the engine adds hosts, the message is "installing host node failed". My engine setup was successful; I used engine-setup --offline. Please help~ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykaul at redhat.com Thu Feb 1 09:21:12 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 1 Feb 2018 11:21:12 +0200 Subject: [ovirt-users] Upgrade via reinstall? In-Reply-To: <2A4EF9EB-209B-4248-952B-879F60EC9A55@squaretrade.com> References: <2A4EF9EB-209B-4248-952B-879F60EC9A55@squaretrade.com> Message-ID: On Thu, Feb 1, 2018 at 1:38 AM, Jamie Lawrence wrote: > Hello, > > I currently have an Ovirt 4.1.8 installation with a hosted engine using > Gluster for storage, with the DBs hosted on a dedicated PG cluster. > > For reasons[1], it seems possibly simpler for me to upgrade our > installation by reinstalling rather than upgrading. In this case, I can > happily bring down the running VMs/otherwise do things that one normally > can't. > > Is there any technical reason I can't/shouldn't rebuild from bare-metal, > including creating a fresh hosted engine, without losing anything? I > suppose a different way of asking this is, is there anything on the > engine/host filesystems that I should preserve/restore for this to work? > The engine-backup utility is your friend and will properly back up for you everything you need. Y. > > Thanks, > > -j > > [1] If this isn't an option, I'll go in to them in order to figure out a > plan B; just avoiding a lot of backstory that isn't needed for the question. > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Thu Feb 1 09:28:00 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 1 Feb 2018 10:28:00 +0100 Subject: [ovirt-users] Using upstream QEMU In-Reply-To: References: Message-ID: > On 31 Jan 2018, at 16:53, Yedidyah Bar David wrote: > > On Wed, Jan 31, 2018 at 5:43 PM, Harry Mallon wrote: >> Hello all, >> >> Has anyone used oVirt with non-oVirt provided QEMU versions? 
>> I need a feature provided by upstream QEMU, but it is disabled in the oVirt/CentOS7 QEMU RPM. just curious - which one? Typically the reason for disabling it is that it's not really stable >> >> I have two possible methods to avoid the issue: >> 1. Fedora has a more recent QEMU which is closer to 'stock'. I see that oVirt 4.2 has no Fedora support, > > Indeed, mostly > >> but is it possible to install the host onto a Fedora machine? > > Didn't try this recently, but it might require not-too-much work with > fc25 or so. > IIRC fc27 is python3-only, and this will require more work (which is > ongoing, but > don't hold your breath). > >> I am trying to use the master branch rpms as recommended in the "No Fedora Support" note, with no luck currently. > > Another option is to try to rebuild the fedora srpm for CentOS 7. > >> 2. Is it safe/sensible to use oVirt with a CentOS7 host running an upstream QEMU version? > > No idea. If it's only for development/testing, I'd say give it a try. it's as sensible as running any bleeding edge stuff. It should work if you manage to resolve qemu deps. You could also rebuild qemu-kvm-ev without the patch which blacklists the feature you want to see. 
Thanks, michal > >> >> Thanks, >> Harry >> >> >> Harry Mallon >> CODEX | Senior Software Engineer >> 60 Poland Street | London | England | W1F 7NT >> E harry.mallon at codex.online | T +44 203 7000 989 >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > > -- > Didi > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From gianluca.cecchi at gmail.com Thu Feb 1 10:31:17 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 1 Feb 2018 11:31:17 +0100 Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 11:48 AM, Simone Tiraboschi wrote: > > > Ciao Gianluca, > we have an issue logging messages with special unicode chars from ansible, > it's tracked here: > https://bugzilla.redhat.com/show_bug.cgi?id=1533500 > but this is just hiding your real issue. > > I'm almost sure that you are facing an issue writing on NFS and thwn dd > returns us an error message with \u2018 and \u2019. > Can you please check your NFS permissions? > > Ciao Simone, thanks for answering. I think you were right. Previously I had this: /nfs/SHE_DOMAIN *(rw) Now I have changed to: /nfs/SHE_DOMAIN *(rw,anonuid=36,anongid=36,all_squash) I restarted the deploy with the answer file # hosted-engine --deploy --config-append=/var/lib/ovirt-hosted-engine-setup/answers/answers-20180129164431.conf and it went ahead... and I have contents inside the directory: # ll /nfs/SHE_DOMAIN/a0351a82-734d-4d9a-a75e-3313d2ffe23a/ total 12 drwxr-xr-x. 2 vdsm kvm 4096 Jan 29 16:40 dom_md drwxr-xr-x. 6 vdsm kvm 4096 Jan 29 16:43 images drwxr-xr-x. 
4 vdsm kvm 4096 Jan 29 16:40 master But it ended with a problem regarding engine vm: [ INFO ] TASK [Wait for engine to start] [ INFO ] ok: [localhost] [ INFO ] TASK [Set engine pub key as authorized key without validating the TLS/SSL certificates] [ INFO ] changed: [localhost] [ INFO ] TASK [Force host-deploy in offline mode] [ INFO ] changed: [localhost] [ INFO ] TASK [include_tasks] [ INFO ] ok: [localhost] [ INFO ] TASK [Obtain SSO token using username/password credentials] [ INFO ] ok: [localhost] [ INFO ] TASK [Add host] [ INFO ] changed: [localhost] [ INFO ] TASK [Wait for the host to become non operational] [ INFO ] ok: [localhost] [ INFO ] TASK [Get virbr0 routing configuration] [ INFO ] changed: [localhost] [ INFO ] TASK [Get ovirtmgmt route table id] [ INFO ] changed: [localhost] [ INFO ] TASK [Check network configuration] [ INFO ] changed: [localhost] [ INFO ] TASK [Clean network configuration] [ INFO ] changed: [localhost] [ INFO ] TASK [Restore network configuration] [ INFO ] changed: [localhost] [ INFO ] TASK [Wait for the host to be up] [ ERROR ] Error: Failed to read response. [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 50, "changed": false, "msg": "Failed to read response."} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources [ INFO ] TASK [Gathering Facts] [ INFO ] ok: [localhost] [ INFO ] TASK [Remove local vm dir] [ INFO ] ok: [localhost] [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180201104600.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed: this system is not reliable, please check the issue,fix and redeploy Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180201102603-1of5a1.log Under /var/log/libvirt/qemu of host from where I'm running the hosted-engine deploy I see this 2018-02-01 09:29:05.515+0000: starting up libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem , 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version: 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: ov42.mydomain LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name guest=HostedEngineLocal,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -cpu Westmere,+kvmclock -m 6184 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 8c8f8163-5b69-4ff5-b67c-07b1a9b8f100 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-1-HostedEngineLocal/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive 
file=/var/tmp/localvm1ClXud/images/918bbfc1-d599-4170-9a92-1ac417bf7658/bb8b3078-fddb-4ce3-8da0-0a191768a357,format=qcow2,if=none,id=drive-virtio-disk0 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/tmp/localvm1ClXud/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:15:7b:27,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object rng-random,id=objrng0,filename=/dev/random -device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on 2018-02-01T09:29:05.771459Z qemu-kvm: -chardev pty,id=charserial0: char device redirected to /dev/pts/3 (label charserial0) 2018-02-01T09:34:19.445774Z qemu-kvm: terminating on signal 15 from pid 6052 (/usr/sbin/libvirtd) 2018-02-01 09:34:19.668+0000: shutting down, reason=shutdown In /var/log/messages: Feb 1 10:29:05 ov42 systemd-machined: New machine qemu-1-HostedEngineLocal. Feb 1 10:29:05 ov42 systemd: Started Virtual Machine qemu-1-HostedEngineLocal. Feb 1 10:29:05 ov42 systemd: Starting Virtual Machine qemu-1-HostedEngineLocal. 
Feb 1 10:29:05 ov42 kvm: 1 guest now active Feb 1 10:29:06 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None Feb 1 10:29:07 ov42 kernel: virbr0: port 2(vnet0) entered learning state Feb 1 10:29:09 ov42 kernel: virbr0: port 2(vnet0) entered forwarding state Feb 1 10:29:09 ov42 kernel: virbr0: topology change detected, propagating Feb 1 10:29:09 ov42 NetworkManager[749]: [1517477349.5180] device (virbr0): link connected Feb 1 10:29:16 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None Feb 1 10:29:27 ov42 python: ansible-command Invoked with warn=True executable=None _uses_shell=True _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 | awk '{ print $5 }' | cut -f1 -d'/' removes=None creates=None chdir=None stdin=None Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPDISCOVER(virbr0) 00:16:3e:15:7b:27 Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPOFFER(virbr0) 192.168.122.200 00:16:3e:15:7b:27 Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPREQUEST(virbr0) 192.168.122.200 00:16:3e:15:7b:27 Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPACK(virbr0) 192.168.122.200 00:16:3e:15:7b:27 . . . Feb 1 10:34:00 ov42 systemd: Starting Virtualization daemon... 
Feb 1 10:34:00 ov42 python: ansible-ovirt_hosts_facts Invoked with pattern=name=ov42.mydomain status=up fetch_nested=False nested_attributes=[] auth={'ca_file': None, 'url': ' https://ov42she.mydomain/ovirt-engine/api', 'insecure': True, 'kerberos': False, 'compress': True, 'headers': None, 'token': 'GOK2wLFZ0PIs1GbXVQjNW-yBlUtZoGRa2I92NkCkm6lwdlQV-dUdP5EjInyGGN_zEVEHFKgR6nuZ-eIlfaM_lw', 'timeout': 0} Feb 1 10:34:03 ov42 systemd: Started Virtualization daemon. Feb 1 10:34:03 ov42 systemd: Reloading. Feb 1 10:34:03 ov42 systemd: [/usr/lib/systemd/system/ip6tables.service:3] Failed to add dependency on syslog.target,iptables.service, ignoring: Invalid argument Feb 1 10:34:03 ov42 systemd: Cannot add dependency job for unit lvm2-lvmetad.socket, ignoring: Unit is masked. Feb 1 10:34:03 ov42 systemd: Starting Cockpit Web Service... Feb 1 10:34:03 ov42 dnsmasq[6322]: read /etc/hosts - 4 addresses Feb 1 10:34:03 ov42 dnsmasq[6322]: read /var/lib/libvirt/dnsmasq/default.addnhosts - 0 addresses Feb 1 10:34:03 ov42 dnsmasq-dhcp[6322]: read /var/lib/libvirt/dnsmasq/default.hostsfile Feb 1 10:34:03 ov42 systemd: Started Cockpit Web Service. 
Feb 1 10:34:03 ov42 cockpit-ws: Using certificate: /etc/cockpit/ws-certs.d/0-self-signed.cert Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem < http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org) Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : hostname: ov42.mydomain Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: error : virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ/images/918bbfc1-d599-4170-9a92-1ac417bf7658': No such file or directory Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : storageDriverAutostart:204 : internal error: Failed to autostart storage pool '918bbfc1-d599-4170-9a92-1ac417bf7658': cannot open directory '/var/tmp/localvm7I0SSJ/images/918bbfc1-d599-4170-9a92-1ac417bf7658': No such file or directory Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ': No such file or directory Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : storageDriverAutostart:204 : internal error: Failed to autostart storage pool 'localvm7I0SSJ': cannot open directory '/var/tmp/localvm7I0SSJ': No such file or directory Feb 1 10:34:03 ov42 systemd: Stopping Suspend/Resume Running libvirt Guests... Feb 1 10:34:04 ov42 libvirt-guests.sh: Running guests on qemu+tls://ov42.mydomain/system URI: HostedEngineLocal Feb 1 10:34:04 ov42 libvirt-guests.sh: Shutting down guests on qemu+tls://ov42.mydomain/system URI... Feb 1 10:34:04 ov42 libvirt-guests.sh: Starting shutdown on guest: HostedEngineLocal If I understood correctly, it seems that libvirtd took charge of the IP assignment, using the default 192.168.122.x network, while my host and my engine should be on 10.4.4.x...?? 
Currently on host, after the failed deploy, I have: # brctl show bridge name bridge id STP enabled interfaces ;vdsmdummy; 8000.000000000000 no ovirtmgmt 8000.001a4a17015d no eth0 virbr0 8000.52540084b832 yes virbr0-nic BTW: on the host the network is managed by NetworkManager. It is supported now in the upcoming 4.2.1, isn't it? Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Thu Feb 1 10:55:47 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 01 Feb 2018 11:55:47 +0100 Subject: [ovirt-users] No available Host to migrate to In-Reply-To: References: Message-ID: <20180201105547.E6E40804B0@smtp03.mail.de> Hi, Thanks for your answer. I totally missed that part when I added the second host to the cluster. I put it in maintenance, removed it from the cluster and then added it again with this setting. Works a lot better! Regards On 01-Feb-2018 at 10:18:23 +0100, stirabos at redhat.com wrote: On Thu, Feb 1, 2018 at 9:06 AM, wrote: Hi, What are the reasons that can cause this message to appear in a cluster where most machines are able to migrate without problem? I have this problem for the Engine VM, and I managed to solve it for another one whose configuration prevented any kind of migration. Regards The engine VM can be migrated only to hosted-engine-configured hosts. Do you have another hosted-engine host up and with the required resources to accommodate the engine VM? _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gianluca.cecchi at gmail.com Thu Feb 1 11:00:05 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 1 Feb 2018 12:00:05 +0100 Subject: [ovirt-users] No available Host to migrate to In-Reply-To: <20180201105547.E6E40804B0@smtp03.mail.de> References: <20180201105547.E6E40804B0@smtp03.mail.de> Message-ID: On Thu, Feb 1, 2018 at 11:55 AM, wrote: > Hi, > Thanks for your answer. > I totally missed that part when I added the second host to the cluster. > I put it in maintenance, removed it from the cluster and then added it > again with this setting. Works a lot better! > > Regards > > > I opened an RFE back in 4.0.5 because I felt this was counter-intuitive: https://bugzilla.redhat.com/show_bug.cgi?id=1399613 See my comments inside the bugzilla, but it was closed as not a bug... ;-( It could have saved you from removing/re-adding the host... -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Thu Feb 1 11:25:16 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 1 Feb 2018 12:25:16 +0100 Subject: [ovirt-users] ovirt 4.2.1 and uploading package profile problem In-Reply-To: References: Message-ID: On Thu, Feb 1, 2018 at 10:17 AM, Gianluca Cecchi wrote: > Hello, > I'm testing a 4.2.1 environment that uses a proxy for yum. > > I see that after enabling 4.2.1pre repos on my future host and installing > ovirt packages, every yum command now gets this message in output: > > Uploading Package Profile > > and after a couple of minutes wait > > Unable to upload Package Profile > > Searching the internet I found posts related to katello... > > Indeed the ovirt install produced installation of katello-agent and > katello-agent-fact-plugin rpms (version 2.9.0.1-1) > > How can I solve this? Do I need to put the proxy anywhere or do I have to > disable any unnecessary service? 
> > Thanks, > > Gianluca > What I notice is on a CentOS 7 host with 4.1.9 I have these yum plugins listed when I run "yum update" Loaded plugins: fastestmirror, langpacks Instead on this CentOS 7 host where I enabled the 4.2.1pre repos I get Loaded plugins: fastestmirror, langpacks, package_upload, product-id, search-disabled-repos, : subscription-manager This system is not registered with an entitlement server. You can use subscription-manager to register. During install I got in yum.log: . . . Jan 29 15:44:59 Installed: libvirt-daemon-driver-interface-3.2.0-14.el7_4.7.x86_64 Jan 29 15:45:00 Installed: vhostmd-0.5-12.el7.x86_64 Jan 29 15:45:00 Installed: vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch Jan 29 15:45:00 Installed: dnsmasq-2.76-2.el7_4.2.x86_64 Jan 29 15:45:00 Installed: python-netifaces-0.10.4-3.el7.x86_64 Jan 29 15:45:00 Installed: python-rhsm-certificates-1.19.10-1.el7_4.x86_64 Jan 29 15:45:00 Installed: python-rhsm-1.19.10-1.el7_4.x86_64 Jan 29 15:45:01 Installed: subscription-manager-1.19.23-1.el7.centos.x86_64 Jan 29 15:45:01 Installed: katello-agent-fact-plugin-2.9.0.1-1.el7.noarch Jan 29 15:45:01 Installed: usbredir-0.7.1-2.el7.x86_64 Jan 29 15:45:01 Installed: scrub-2.5.2-7.el7.x86_64 . . . The katello-agent rpm contains: /etc/yum/pluginconf.d/package_upload.conf and as seen above it was also installed subscription-manager-1.19.23-1.el7.centos.x86_64 that puts: /etc/yum/pluginconf.d/subscription-manager.conf /etc/yum/pluginconf.d/product-id.conf /etc/yum/pluginconf.d/search-disabled-repos.conf Reasons? Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From crl.langlois at gmail.com Thu Feb 1 11:40:26 2018 From: crl.langlois at gmail.com (carl langlois) Date: Thu, 1 Feb 2018 06:40:26 -0500 Subject: [ovirt-users] Engine VM cannot be migrated In-Reply-To: References: <20180131175040.BCCB080757@smtp03.mail.de> Message-ID: When installing your host make sure you select the deploy in the hosted engine tab. 
You should just reinstall it with this option. Carl On 31 Jan 2018 at 12:51, wrote: Hi, What can prevent the hosted engine VM from being migrated? I can migrate any other VM in the cluster (same servers), and it is supposed to be configured to be migrated manually or automatically as far as I can see. Regards _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Harry.Mallon at codex.online Thu Feb 1 12:43:00 2018 From: Harry.Mallon at codex.online (Harry Mallon) Date: Thu, 1 Feb 2018 12:43:00 +0000 Subject: [ovirt-users] Using upstream QEMU In-Reply-To: References: Message-ID: <039252F9-F2C8-45B0-A55E-2AE5E70AE02B@codex.online> Apologies for the formatting in the following message, I can't get Office for Mac to play ball... Harry Mallon CODEX | Senior Software Engineer 60 Poland Street | London | England | W1F 7NT E harry.mallon at codex.online | T +44 203 7000 989 On 01/02/2018, 09:28, "Michal Skrivanek" wrote: > On 31 Jan 2018, at 16:53, Yedidyah Bar David wrote: > > On Wed, Jan 31, 2018 at 5:43 PM, Harry Mallon wrote: >> Hello all, >> >> Has anyone used oVirt with non-oVirt provided QEMU versions? >> I need a feature provided by upstream QEMU, but it is disabled in the oVirt/CentOS7 QEMU RPM. just curious - which one? Typically the reason for disabling it is that it's not really stable I am trying to run OSX guests on a host (Apple hardware). The "applesmc" device is part of that puzzle and is disabled in the Red Hat QEMU. >> >> I have two possible methods to avoid the issue: >> 1. Fedora has a more recent QEMU which is closer to 'stock'. I see that oVirt 4.2 has no Fedora support, > > Indeed, mostly > >> but is it possible to install the host onto a Fedora machine? > > Didn't try this recently, but it might require not-too-much work with > fc25 or so. 
> IIRC fc27 is python3-only, and this will require more work (which is > ongoing, but > don't hold your breath). > >> I am trying to use the master branch rpms as recommended in the "No Fedora Support" note with no luck currently. > > Another option is to try to rebuild the fedora srpm for CentOS 7. > >> 2. Is it safe/sensible to use oVirt with a CentOS7 host running an upstream QEMU version? > > No idea. If it's only for development/testing, I'd say give it a try. it?s as sensible as running any bleeding edge stuff. It should work if you manage to resolve qemu deps you could also rebuild qemu-kvm-ev without the patch which blacklists the feature you want to see. I was able to patch and rebuild qemu-kvm-ev, but I think I have hit more problems using the patched firmware from here: http://www.contrib.andrew.cmu.edu/~somlo/OSXKVM/. Hopefully that gets me a little closer though. Getting these mac VMs to work has so far been a huge pain. Thanks, michal > >> >> Thanks, >> Harry >> >> >> Harry Mallon >> CODEX | Senior Software Engineer >> 60 Poland Street | London | England | W1F 7NT >> E harry.mallon at codex.online | T +44 203 7000 989 >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > > -- > Didi > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > Harry From oourfali at redhat.com Thu Feb 1 12:53:46 2018 From: oourfali at redhat.com (Oved Ourfali) Date: Thu, 1 Feb 2018 14:53:46 +0200 Subject: [ovirt-users] New oVirt blog post - oVirt 4.2.2 web admin UI browser bookmarks In-Reply-To: References: Message-ID: Looks awesome! On Thu, Feb 1, 2018 at 10:16 AM, Yaniv Kaul wrote: > oVirt web admin UI now allows the user to bookmark all entities and > searches using their browser. > > Full blog post @ https://ovirt.org/blog/2018/01/ovirt-admin-bookmarks/ > > Y. 
> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pym0914 at 163.com Thu Feb 1 12:06:31 2018 From: pym0914 at 163.com (Pym) Date: Thu, 1 Feb 2018 20:06:31 +0800 (CST) Subject: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration. In-Reply-To: References: <22ca0fd1.aee0.1614c0a03df.Coremail.pym0914@163.com> <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> Message-ID: <6c18742a.bb33.1615142da93.Coremail.pym0914@163.com> The environment on my side may be different from the link. My vm1 can be used normally after it is started on host2, but there is still information left on host1 that is not cleaned up. Both the web interface and the backend can still get the information of vm1 on host1, even though vm1 has been successfully started on host2 with the HA function. I would like to ask: is the UUID of the virtual machine stored in the database, or where else is it maintained? Is it not successfully deleted after using the HA function? At 2018-02-01 16:12:16, "Simone Tiraboschi" wrote: On Thu, Feb 1, 2018 at 2:21 AM, Pym wrote: I checked vm1: it stays in the up state and can be used, but on host1, after the shutdown, there is a suspended vm1 left behind that cannot be used. This is the problem now. On host1, you can get the information of vm1 using "vdsm-client Host getVMList", but you can't get the vm1 information using "virsh list". Maybe a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1505399 Arik? At 2018-02-01 07:16:37, "Simone Tiraboschi" wrote: On Wed, Jan 31, 2018 at 12:46 PM, Pym wrote: Hi: The current environment is as follows: Ovirt-engine version 4.2.0 is a source code compilation and installation. Add two hosts, host1 and host2, respectively.
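The mismatch described in this thread — VDSM still reporting a VM on the source host that libvirt no longer runs there — can be made visible by diffing the two UUID lists. A sketch of the comparison; the lists are stubbed with sample UUIDs here, while on a real host they would come from `vdsm-client Host getVMList` and `virsh -r list --uuid`:

```shell
# Stub the output of the two commands; on a real host:
#   vdsm-client Host getVMList      # VMs as VDSM sees them
#   virsh -r list --uuid            # domains libvirt actually runs
vdsm_list=$(mktemp); virsh_list=$(mktemp)
printf 'uuid-vm1\nuuid-vm2\n' > "$vdsm_list"    # VDSM still lists vm1
printf 'uuid-vm2\n' > "$virsh_list"             # libvirt only runs vm2

# UUIDs known to VDSM but unknown to libvirt are stale leftovers:
grep -vxF -f "$virsh_list" "$vdsm_list"
rm -f "$vdsm_list" "$virsh_list"
```

Any UUID the comparison prints is a record VDSM kept after the migration and is a candidate for the kind of cleanup discussed in the referenced bug.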
At host1, a virtual machine is created on vm1, and a vm2 is created on host2 and HA is configured. Operation steps: Use the shutdown -r command on host1. Vm1 successfully migrated to host2. When host1 is restarted, the following situation occurs: The state of the vm2 will be shown in two images, switching from up and pause. When I perform the "vdsm-client Host getVMList" in host1, I will get the information of vm1. When I execute the "vdsm-client Host getVMList" in host2, I will get the information of vm1 and vm2. When I do "virsh list" in host1, there is no virtual machine information. When I execute "virsh list" at host2, I will get information of vm1 and vm2. How to solve this problem? Is it the case that vm1 did not remove the information on host1 during the migration, or any other reason? Did you also check if your vms always remained up? In 4.2 we have libvirt-guests service on the hosts which tries to properly shutdown the running VMs on host shutdown. Thank you. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34515 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35900 bytes Desc: not available URL: From gianluca.cecchi at gmail.com Thu Feb 1 15:57:53 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 1 Feb 2018 16:57:53 +0100 Subject: [ovirt-users] kernel perf messages should I warn? 
Message-ID: Hello, I have passed storage san of a 4.1.8 test env from an HP MSA2000 to an IBM V3700 I'm doing some basic tests inside a VM and sometimes I see this kind of messages kernel: perf: interrupt took too long (6813 > 6810), lowering kernel.perf_event_max_sample_rate to 29000 that I didn't see when using the MSA Should I worry about them? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Thu Feb 1 18:19:01 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 1 Feb 2018 19:19:01 +0100 Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure In-Reply-To: References: Message-ID: On Thu, Feb 1, 2018 at 11:31 AM, Gianluca Cecchi wrote: > On Wed, Jan 31, 2018 at 11:48 AM, Simone Tiraboschi > wrote: > >> >> >> Ciao Gianluca, >> we have an issue logging messages with special unicode chars from >> ansible, it's tracked here: >> https://bugzilla.redhat.com/show_bug.cgi?id=1533500 >> but this is just hiding your real issue. >> >> I'm almost sure that you are facing an issue writing on NFS and thwn dd >> returns us an error message with \u2018 and \u2019. >> Can you please check your NFS permissions? >> >> > > Ciao Simone, thanks for answering. > I think you were right. > Previously I had this: > > /nfs/SHE_DOMAIN *(rw) > > Now I have changed to: > > /nfs/SHE_DOMAIN *(rw,anonuid=36,anongid=36,all_squash) > > I restarted the deploy with the answer file > > # hosted-engine --deploy --config-append=/var/lib/ > ovirt-hosted-engine-setup/answers/answers-20180129164431.conf > > and it went ahead... and I have contents inside the directory: > > # ll /nfs/SHE_DOMAIN/a0351a82-734d-4d9a-a75e-3313d2ffe23a/ > total 12 > drwxr-xr-x. 2 vdsm kvm 4096 Jan 29 16:40 dom_md > drwxr-xr-x. 6 vdsm kvm 4096 Jan 29 16:43 images > drwxr-xr-x. 
4 vdsm kvm 4096 Jan 29 16:40 master > > But it ended with a problem regarding engine vm: > > [ INFO ] TASK [Wait for engine to start] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Set engine pub key as authorized key without validating > the TLS/SSL certificates] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Force host-deploy in offline mode] > [ INFO ] changed: [localhost] > [ INFO ] TASK [include_tasks] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Obtain SSO token using username/password credentials] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Add host] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Wait for the host to become non operational] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Get virbr0 routing configuration] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Get ovirtmgmt route table id] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Check network configuration] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Clean network configuration] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Restore network configuration] > [ INFO ] changed: [localhost] > [ INFO ] TASK [Wait for the host to be up] > [ ERROR ] Error: Failed to read response. > [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 50, "changed": > false, "msg": "Failed to read response."} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > [ INFO ] Cleaning temporary resources > [ INFO ] TASK [Gathering Facts] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] ok: [localhost] > [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- > setup/answers/answers-20180201104600.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: this system is not reliable, > please check the issue,fix and redeploy > Log file is located at /var/log/ovirt-hosted-engine- > setup/ovirt-hosted-engine-setup-20180201102603-1of5a1.log > > Under /var/log/libvirt/qemu of host from where I'm running the > hosted-engine deploy I see this > > > 2018-02-01 09:29:05.515+0000: starting up libvirt version: 3.2.0, package: > 14.el7_4.7 (CentOS BuildSystem , > 2018-01-04-19:31:34, c1bm.rdu2.centos.org), qemu version: > 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.13.1), hostname: ov42.mydomain > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin > QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name > guest=HostedEngineLocal,debug-threads=on -S -object > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-1- > HostedEngineLocal/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off > -cpu Westmere,+kvmclock -m 6184 -realtime mlock=off -smp > 1,sockets=1,cores=1,threads=1 -uuid 8c8f8163-5b69-4ff5-b67c-07b1a9b8f100 > -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/ > var/lib/libvirt/qemu/domain-1-HostedEngineLocal/monitor.sock,server,nowait > -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc > -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1 > -boot menu=off,strict=on -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 > -drive 
file=/var/tmp/localvm1ClXud/images/918bbfc1-d599-4170- > 9a92-1ac417bf7658/bb8b3078-fddb-4ce3-8da0-0a191768a357, > format=qcow2,if=none,id=drive-virtio-disk0 -device > virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive- > virtio-disk0,id=virtio-disk0,bootindex=1 -drive > file=/var/tmp/localvm1ClXud/seed.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on > -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev > tap,fd=26,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-pci,netdev= > hostnet0,id=net0,mac=00:16:3e:15:7b:27,bus=pci.0,addr=0x3 -chardev > pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/ > target/domain-1-HostedEngineLocal/org.qemu.guest_agent.0,server,nowait > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev= > charchannel0,id=channel0,name=org.qemu.guest_agent.0 -vnc 127.0.0.1:0 > -device VGA,id=video0,vgamem_mb=16,bus=pci.0,addr=0x2 -object > rng-random,id=objrng0,filename=/dev/random -device > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on > 2018-02-01T09:29:05.771459Z qemu-kvm: -chardev pty,id=charserial0: char > device redirected to /dev/pts/3 (label charserial0) > 2018-02-01T09:34:19.445774Z qemu-kvm: terminating on signal 15 from pid > 6052 (/usr/sbin/libvirtd) > 2018-02-01 09:34:19.668+0000: shutting down, reason=shutdown > > In /var/log/messages: > > Feb 1 10:29:05 ov42 systemd-machined: New machine > qemu-1-HostedEngineLocal. > Feb 1 10:29:05 ov42 systemd: Started Virtual Machine > qemu-1-HostedEngineLocal. > Feb 1 10:29:05 ov42 systemd: Starting Virtual Machine > qemu-1-HostedEngineLocal. 
> Feb 1 10:29:05 ov42 kvm: 1 guest now active > Feb 1 10:29:06 ov42 python: ansible-command Invoked with warn=True > executable=None _uses_shell=True > _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 > | awk '{ print $5 }' | cut > -f1 -d'/' removes=None creates=None chdir=None stdin=None > Feb 1 10:29:07 ov42 kernel: virbr0: port 2(vnet0) entered learning state > Feb 1 10:29:09 ov42 kernel: virbr0: port 2(vnet0) entered forwarding state > Feb 1 10:29:09 ov42 kernel: virbr0: topology change detected, propagating > Feb 1 10:29:09 ov42 NetworkManager[749]: [1517477349.5180] device > (virbr0): link connected > Feb 1 10:29:16 ov42 python: ansible-command Invoked with warn=True > executable=None _uses_shell=True > _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 > | awk '{ print $5 }' | cut > -f1 -d'/' removes=None creates=None chdir=None stdin=None > Feb 1 10:29:27 ov42 python: ansible-command Invoked with warn=True > executable=None _uses_shell=True > _raw_params=virsh -r net-dhcp-leases default | grep -i 00:16:3e:15:7b:27 > | awk '{ print $5 }' | cut > -f1 -d'/' removes=None creates=None chdir=None stdin=None > Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPDISCOVER(virbr0) > 00:16:3e:15:7b:27 > Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPOFFER(virbr0) 192.168.122.200 > 00:16:3e:15:7b:27 > Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPREQUEST(virbr0) > 192.168.122.200 00:16:3e:15:7b:27 > Feb 1 10:29:30 ov42 dnsmasq-dhcp[6322]: DHCPACK(virbr0) 192.168.122.200 > 00:16:3e:15:7b:27 > . . . > Feb 1 10:34:00 ov42 systemd: Starting Virtualization daemon... 
> Feb 1 10:34:00 ov42 python: ansible-ovirt_hosts_facts Invoked with > pattern=name=ov42.mydomain status=up fetch_nested=False > nested_attributes=[] auth={'ca_file': None, 'url': ' > https://ov42she.mydomain/ovirt-engine/api', 'insecure': True, 'kerberos': > False, 'compress': True, 'headers': None, 'token': 'GOK2wLFZ0PIs1GbXVQjNW- > yBlUtZoGRa2I92NkCkm6lwdlQV-dUdP5EjInyGGN_zEVEHFKgR6nuZ-eIlfaM_lw', > 'timeout': 0} > Feb 1 10:34:03 ov42 systemd: Started Virtualization daemon. > Feb 1 10:34:03 ov42 systemd: Reloading. > Feb 1 10:34:03 ov42 systemd: [/usr/lib/systemd/system/ip6tables.service:3] > Failed to add dependency on syslog.target,iptables.service, ignoring: > Invalid argument > Feb 1 10:34:03 ov42 systemd: Cannot add dependency job for unit > lvm2-lvmetad.socket, ignoring: Unit is masked. > Feb 1 10:34:03 ov42 systemd: Starting Cockpit Web Service... > Feb 1 10:34:03 ov42 dnsmasq[6322]: read /etc/hosts - 4 addresses > Feb 1 10:34:03 ov42 dnsmasq[6322]: read /var/lib/libvirt/dnsmasq/default.addnhosts > - 0 addresses > Feb 1 10:34:03 ov42 dnsmasq-dhcp[6322]: read /var/lib/libvirt/dnsmasq/ > default.hostsfile > Feb 1 10:34:03 ov42 systemd: Started Cockpit Web Service. 
> Feb 1 10:34:03 ov42 cockpit-ws: Using certificate: > /etc/cockpit/ws-certs.d/0-self-signed.cert > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : > libvirt version: 3.2.0, package: 14.el7_4.7 (CentOS BuildSystem < > http://bugs.centos.org>, 2018-01-04-19:31:34, c1bm.rdu2.centos.org) > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: info : > hostname: ov42.mydomain > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.840+0000: 6076: error : > virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ/ > images/918bbfc1-d599-4170-9a92-1ac417bf7658': No such file or directory > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : > storageDriverAutostart:204 : internal error: Failed to autostart storage > pool '918bbfc1-d599-4170-9a92-1ac417bf7658': cannot open directory > '/var/tmp/localvm7I0SSJ/images/918bbfc1-d599-4170-9a92-1ac417bf7658': No > such file or directory > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : > virDirOpenInternal:2829 : cannot open directory '/var/tmp/localvm7I0SSJ': > No such file or directory > Feb 1 10:34:03 ov42 libvirtd: 2018-02-01 09:34:03.841+0000: 6076: error : > storageDriverAutostart:204 : internal error: Failed to autostart storage > pool 'localvm7I0SSJ': cannot open directory '/var/tmp/localvm7I0SSJ': No > such file or directory > Feb 1 10:34:03 ov42 systemd: Stopping Suspend/Resume Running libvirt > Guests... > Feb 1 10:34:04 ov42 libvirt-guests.sh: Running guests on > qemu+tls://ov42.mydomain/system URI: HostedEngineLocal > Feb 1 10:34:04 ov42 libvirt-guests.sh: Shutting down guests on > qemu+tls://ov42.mydomain/system URI... 
> Feb 1 10:34:04 ov42 libvirt-guests.sh: Starting shutdown on guest: > HostedEngineLocal > You definitively hit this one: https://bugzilla.redhat.com/show_bug.cgi?id=1539040 host-deploy stops libvirt-guests triggering a shutdown of all the running VMs (including HE one) We rebuilt host-deploy with a fix for that today. It affects only the host where libvirt-guests has already been configured by a 4.2 host-deploy in the past. As a workaround you have to manually stop libvirt-guests before and deconfigure it on /etc/sysconfig/libvirt-guests.conf before running hosted-engine-setup again. > If I understood corrctly it seems that libvirtd took in charge the ip > assignement, using the default 192.168.122.x network, while my host and my > engine should be on 10.4.4.x...?? > This is absolutely fine. Let me explain: with the new ansible based flow we completely reverted the hosted-engine deployment flow. In the past hosted-engine-setup was directly preparing the host, the storage, the network and a VM in advance via vdsm and the user was waiting for the engine at the to auto-import everything with a lot of possible issues in the middle. Now hosted-engine-setup, doing everything via ansible, bootstraps a local VM on local storage over the default natted libvirt network (that's why you temporary see that address) and it deploys ovirt-engine there. Then hosted-engine-setup will use the engine running on the bootstrap local VM to set up everything else (storage, network, vm...) using the well know and tested engine APIs. Only at the end it migrates the disk of the local VM over the disk created by engine on the shared storage and ovirt-ha-agent will boot the engine VM from as usual. More than that, at this point we don't need auto-import code on engine side since all the involved entities are already know by the engine since it created them. 
> Currently on host, after the failed deploy, I have: > > # brctl show > bridge name bridge id STP enabled interfaces > ;vdsmdummy; 8000.000000000000 no > ovirtmgmt 8000.001a4a17015d no eth0 > virbr0 8000.52540084b832 yes virbr0-nic > > BTW: on host I have network managed by NetworkManager. It is supported now > in upcoming 4.2.1, isn't it? > Yes, it is. > > Gianluca > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlawrence at squaretrade.com Thu Feb 1 18:43:49 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Thu, 1 Feb 2018 10:43:49 -0800 Subject: [ovirt-users] Upgrade via reinstall? In-Reply-To: References: <2A4EF9EB-209B-4248-952B-879F60EC9A55@squaretrade.com> Message-ID: <743B6D75-45A8-4A2D-BCCA-A25B173FA163@squaretrade.com> > On Feb 1, 2018, at 1:21 AM, Yaniv Kaul wrote: > The engine-backup utility is your friend and will properly back up for you everything you need. Thanks, but that is an answer to a different question I didn't ask. It does, implicitly, seem to indicate that there probably are artifacts on hosts that need to be preserved, and hopefully I can find specifics in that script . -j From ehaas at redhat.com Thu Feb 1 20:24:38 2018 From: ehaas at redhat.com (Edward Haas) Date: Thu, 1 Feb 2018 22:24:38 +0200 Subject: [ovirt-users] Node network setup In-Reply-To: <20180130190727.18DE6801D5@smtp03.mail.de> References: <20180130190727.18DE6801D5@smtp03.mail.de> Message-ID: It is not clear to me what you are attempting to do exactly, but networking settings should be handled through the setup networks window on Engine. (Network->Hosts->->[specific host]->Interface tab -> SetupNetwork You can then define bonds by dragging the nics one over the other. Thanks, Edy. On Tue, Jan 30, 2018 at 9:07 PM, wrote: > Hi, > I am trying to setup a cluster of two nodes, with self hoste Engine. > Things went fine for the first machine, but it as rather messy about the > second one. 
> I would like to have load balancing and failover for both the management > network and storage (NFS repository). > > So what exactly should I do to get a working network stack which can be > recognized when I try to add this host to the cluster? > > I have tried configuring bonds and bridges using Cockpit, and using manual > "ifcfg" files, but every time I see the bridges and the bonds not linked > in the Engine interface, so the new host cannot be enrolled. > If I try to link "ovirtmgmt" to the associated bond, I have a > connectivity loss because it is the management device, and I have to > restart the network services. As the management configuration is not OK, I > can't set up the storage connection. > > And if I just try to activate the host, it will install and configure > things and then complain about missing "ovirtmgmt" and "nfs" networks, > which both exist and work at the CentOS level. > > The interface, bond and bridge names are copied/pasted from the first server. > > # brctl show ovirtmgmt > bridge name bridge id STP enabled interfaces > ovirtmgmt 8000.44a842394200 no bond0 > # ip addr show bond0 > 33: bond0: mtu 1500 qdisc > noqueue master ovirtmgmt state UP qlen 1000 > link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff > inet6 fe80::46a8:42ff:fe39:4200/64 scope link > valid_lft forever preferred_lft forever > # ip addr show em1 > 2: em1: mtu 1500 qdisc mq master > bond0 state UP qlen 1000 > link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff > # ip addr show em3 > 4: em3: mtu 1500 qdisc mq master > bond0 state UP qlen 1000 > link/ether 44:a8:42:39:42:00 brd ff:ff:ff:ff:ff:ff > > By the way, is it mandatory to stop and disable NetworkManager or not? > > Thanks for any kind of help :-) > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
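For comparison with the setup described above, the classic ifcfg layout for a management bridge on top of a bond looks roughly like the following. This is only a sketch with placeholder device names, bonding options, and addresses — not a drop-in config — and, as noted earlier in the thread, once the host is enrolled the Setup Networks dialog in Engine should own this configuration. The ifcfg format is plain KEY=value shell syntax:

```shell
# /etc/sysconfig/network-scripts/ifcfg-em1 (and likewise for em3)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=active-backup miimon=100"   # placeholder bonding mode
BRIDGE=ovirtmgmt
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
BOOTPROTO=none
IPADDR=192.0.2.10        # placeholder address
NETMASK=255.255.255.0
ONBOOT=yes
```

The key relationships are the SLAVE/MASTER lines tying the NICs to the bond and the BRIDGE line tying the bond to the ovirtmgmt bridge, which carries the IP address.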
URL: From farkey_2000 at yahoo.com Fri Feb 2 01:31:50 2018 From: farkey_2000 at yahoo.com (Andy) Date: Fri, 2 Feb 2018 01:31:50 +0000 (UTC) Subject: [ovirt-users] Unable to deploy hosted engine to other hosts References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> Message-ID: <2142307635.1745246.1517535111701@mail.yahoo.com> Support, I am having a problem with redeploying the hosted engine to the second and third host in the cluster.? This setup is from a clean install of 4.2 and all the hosts are up and functional in the engine.? When I try to deploy the engine the install fails.? Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: at 0x2807590> timeout=15, duration=270 at 0x1e0bc90> task#=186 at 0x27f0590> The setup is a three host ovirt 4.2 running CentOS 7.4, with three gluster volumes (engine and two data).? I have attached the agent, broker, ovirt setup, and vdsm logs.? Any help is appreciated. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logs.tar.gz Type: application/gzip Size: 2677352 bytes Desc: not available URL: From ddqlo at 126.com Fri Feb 2 03:46:07 2018 From: ddqlo at 126.com (=?GBK?B?tq3H4MH6?=) Date: Fri, 2 Feb 2018 11:46:07 +0800 (CST) Subject: [ovirt-users] active directory and sso In-Reply-To: References: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> Message-ID: <2cf9122f.38ed.161549f1515.Coremail.ddqlo@126.com> Thanks for the reply. I have completely configured all the things in option 1 which you told. But it seems that sso still does not work. My domain forest is "test.org" and my user is "test". When I login the user portal, I get "test at test.org@test.org" int the top right corner. Should it be "test at test.org"? Is it possible that engine send wrong user name to the guest agent? At 2018-02-01 15:35:57, "Martin Perina" wrote: On Thu, Feb 1, 2018 at 9:13 AM, ??? 
wrote: Hi, all I am trying to make SSO working with windows7 vm in an ovirt 4.1 environment. Ovirt-guest-agent has been installed in windows7 vm. I have an active directory server of windows2012 and I have configured the engine using "ovirt-engine-extension-aaa-ldap-setup" successfully. The windows7 vm has joined the domain,too. But when I login the userportal using a user created in the AD server, I still have to login the windows7 vm using the same user for the second time. It seems that SSO does not work. Anyone can help me? Thanks! We are not providing full SSO for VMs . At the moment you have 2 options: 1. If you want user to be automatically logged in into a VM, then you need to setup SSO using aaa-ldap extension for AD (please don't forget to answer Yes for question about SSO for VMs in setup tool). Andf of course in a VM you need to have installed and enabled guest agent. Once user logs into VM Portal and clicks on a VM, then he should be automatically logged into it. 2. If you setup kerberos for engine SSO, then you don't need to enter password to loging into VM Portal, but in such case we cannot pass a password into a VM and user are not automatically logged in. Martin _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ??2.png Type: image/png Size: 2736 bytes Desc: not available URL: From mperina at redhat.com Fri Feb 2 07:46:07 2018 From: mperina at redhat.com (Martin Perina) Date: Fri, 2 Feb 2018 08:46:07 +0100 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: On Fri, Feb 2, 2018 at 5:40 AM, Terry hey wrote: > Dear Martin, > > Um..Since i am going to use HPE ProLiant DL360 Gen10 Server to setup oVirt > Node(Hypervisor). HP G10 is using ilo5 rather than ilo4. Therefore, i would > like to ask whether oVirt power management support iLO5 or not. > ?We don't have any hardware with iLO5 available, but there is a good chance that it will be compatible with iLO4. Have you tried to setup your server with iLO4? Does the Test in Edit fence agent dialog work?? If not could you please try to install fence-agents-all package on different host and execute following: fence_ilo4 -a -l -p -v -o status and share the output? Thanks Martin > If not, do you have any idea to setup power management with HP G10? > > Regards, > Terry > > 2018-02-01 16:21 GMT+08:00 Martin Perina : > >> >> >> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >> lorenzetto.luca at gmail.com> wrote: >> >>> Hi, >>> >>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. Try >>> using the standard ipmi. >>> >> >> ?It's not just an alias, ilo3/ilo4 also have different defaults than >> ipmilan. For example if you use ilo4, then by default following is used: >> >> ? >> >> ?lanplus=1 >> power_wait=4 >> >> ?So I recommend to start with ilo4 and add any necessary custom options >> into Options field. If you need some custom >> options, could you please share them with us? 
It would be very helpful >> for us, if needed we could introduce ilo5 with >> different defaults then ilo4 >> >> Thanks >> >> Martin >> >> >>> Luca >>> >>> >>> >>> Il 31 gen 2018 11:14 PM, "Terry hey" ha scritto: >>> >>>> Dear all, >>>> Did oVirt 4.2 Power management support iLO5 as i could not see iLO5 >>>> option in Power Management. >>>> >>>> Regards >>>> Terry >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Martin Perina >> Associate Manager, Software Engineering >> Red Hat Czech s.r.o. >> > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mperina at redhat.com Fri Feb 2 07:50:49 2018 From: mperina at redhat.com (Martin Perina) Date: Fri, 2 Feb 2018 08:50:49 +0100 Subject: [ovirt-users] active directory and sso In-Reply-To: <2cf9122f.38ed.161549f1515.Coremail.ddqlo@126.com> References: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> <2cf9122f.38ed.161549f1515.Coremail.ddqlo@126.com> Message-ID: On Fri, Feb 2, 2018 at 4:46 AM, ??? wrote: > Thanks for the reply. I have completely configured all the things in > option 1 which you told. But it seems that sso still does not work. My > domain forest is "test.org" and my user is "test". When I login the user > portal, I get "test at test.org@test.org" int the top right corner. Should > it be "test at test.org"? > ?This? is fine, for AD we are using UPN as username (in your case ' test at test.org') and we concatenate this with authz extension name (in your case '@test.org'). Is it possible that engine send wrong user name to the guest agent? 
> > ?Could you please share engine.log from, after you try to login to VM Portal and open console to the VM to investigate? Thanks Martin At 2018-02-01 15:35:57, "Martin Perina" wrote: > > > > On Thu, Feb 1, 2018 at 9:13 AM, ??? wrote: > >> Hi, all >> I am trying to make SSO working with windows7 vm in an ovirt 4.1 >> environment. Ovirt-guest-agent has been installed in windows7 vm. I have an >> active directory server of windows2012 and I have configured the engine >> using "ovirt-engine-extension-aaa-ldap-setup" successfully. The windows7 >> vm has joined the domain,too. But when I login the userportal using a user >> created in the AD server, I still have to login the windows7 vm using the >> same user for the second time. It seems that SSO does not work. >> Anyone can help me? Thanks! >> > > We are not providing full SSO for > VMs > . At the moment you have 2 options: > > 1. If you want user to be automatically logged in into a VM, then you need > to setup SSO using aaa-ldap extension for AD (please don't forget to answer > Yes for question about SSO for VMs in setup tool). Andf of course in a VM > you need to have installed and enabled guest agent. Once user logs into VM > Portal and clicks on a VM, then he should be automatically logged into it. > > 2. If you setup kerberos for engine SSO, then you don't need to enter > password to loging into VM Portal, but in such case we cannot pass a > password into a VM and user are not automatically logged in. > > Martin > > >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > > > > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ??2.png Type: image/png Size: 2736 bytes Desc: not available URL: From 814280054 at qq.com Fri Feb 2 07:50:04 2018 From: 814280054 at qq.com (814280054 at qq.com) Date: Fri, 2 Feb 2018 15:50:04 +0800 Subject: [ovirt-users] qxl win7 photoshop cc 2014 Message-ID: <2018020215500418008311@qq.com>+DC5D6F3EA389CC8A I test qxl driver: I create a win7 x64 virtual machine in kvm. And install photoshop cc 2014 in win7. when i use selecte tools, generate many lines which are not needed. Can you help me ? CHEN Zhiguo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Catch(02-02-15-49-48).jpg Type: image/jpeg Size: 765647 bytes Desc: not available URL: From mperina at redhat.com Fri Feb 2 08:36:57 2018 From: mperina at redhat.com (Martin Perina) Date: Fri, 2 Feb 2018 09:36:57 +0100 Subject: [ovirt-users] Apache Directory Server In-Reply-To: References: Message-ID: On Wed, Jan 24, 2018 at 1:35 PM, C Williams wrote: > Hello, > > Thanks for getting back with me ! > > Here is some info > > 1. Does it use RFC2307 as the schema or something else? > > I have tried various flavors of the RFC2307 pre-set configs . I think > I,ve tried most of these .. > > 1 - 389ds > 2 - 389ds RFC-2307 Schema > > 4 - IBM Security Directory Server > 5 - IBM Security Directory Server RFC-2307 Schema > > 7 - Novell eDirectory RFC-2307 Schema > 8 - OpenLDAP RFC-2307 Schema > 9 - OpenLDAP Standard Schema > 10 - Oracle Unified Directory RFC-2307 Schema > 11 - RFC-2307 Schema (Generic) > 12 - RHDS > 13 - RHDS RFC-2307 Schema > 14 - iPlanet > ?Those profiles were created for servers we have tested, but it's highly probable that you will need a completely new profile for Apache DS. Due to this you cannot use setup tool, but you need to perform manual configuration as described in /usr/share/doc/ovirt-engine-extension-aaa-ldap-1.3.6/README. > > 2. 
What is the attribute name specifying available base DNs? > > dc=,dc=com > No, this is the DN; we need to know the name of the attribute within LDAP which contains the list of existing base DNs. For example, for a 389ds server using RFC2307 this information is stored in the defaultNamingContext attribute (for details you can take a look at /usr/share/ovirt-engine-extension-aaa-ldap/profiles/rfc2307-389ds.properties). > > > 3. What is the attribute name specifying unique ID of a record? > > dn: uid=,ou=users,dc=,dc=com > No, this is the DN; each record in LDAP is usually uniquely identified by a special attribute (so, for example, you can move a record to a different DN). For example, for a 389ds server using RFC2307 this unique identifier is stored in the nsUniqueId attribute (for details you can take a look at /usr/share/ovirt-engine-extension-aaa-ldap/profiles/rfc2307-389ds.properties). The above information should be available somewhere in the Apache DS documentation. > More on this ... > > I changed the following in /usr/share/ovirt-engine- > extension-aaa-ldap/setup/plugins/ovirt-engine-extension-aaa-ldap/ldap/ > common.py to meet their need for port 10389 ... > > 636 if self.environment[ > constants.LDAPEnv.PROTOCOL > ] == 'ldaps' > #else (389 if port is None else port) > else (10389 if port is None else port) > > Please don't do that; files in /usr/share are read-only for users and all changes will be overwritten by the next update. > I also injected the following into the /var/tmp/*profile.properties" > prior to testing user authentication using the setup tool > Yes, that's the right way if you need to change something, but you need to perform those changes in the /etc/ovirt-engine/aaa directory; /var/tmp is used only as a temporary directory for the setup tool. > vars.port = 10389 > pool.default.serverset.single.port = ${global:vars.port} > > > Thank You for Helping !! 
> > Charles Williams > > > > On Wed, Jan 24, 2018 at 3:50 AM, Martin Perina wrote: > >> Hi, >> >> officially we don't support Apache DS, but aaa-ldap is quite extensible >> so it should be possible attach it to oVirt. >> As we don't have Apache DS installed, could you please provide us >> following information? >> >> 1. Does it use RFC2307 as the schema or something else? >> 2. What is the attribute name specifying available base DNs? >> 3. What is the attribute name specifying unique ID of a record? >> >> Ondro, any other information required? >> >> Thanks >> >> Martin >> >> >> On Wed, Jan 24, 2018 at 3:34 AM, C Williams >> wrote: >> >>> Hello, >>> >>> Has anyone successfully connected the ovirt-engine to Apache Directory >>> Server 2.0 ? >>> >>> I have tried the pre-set connections offered by oVirt and have been able >>> to connect to the server on port 10389 after adding the port to a >>> serverset.port. I can query the directory and see users but I cannot log >>> onto the console as a user in the directory. >>> >>> If any one has any experience/guidance on this, please let me know. >>> >>> Thank You >>> >>> Charles Williams >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Martin Perina >> Associate Manager, Software Engineering >> Red Hat Czech s.r.o. >> > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... 
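Pulling Martin's answers in this thread together: a custom profile for Apache DS would be a new properties file under /etc/ovirt-engine/aaa, modelled on the profiles shipped in /usr/share/ovirt-engine-extension-aaa-ldap/profiles/. A rough, untested sketch of what such a file might look like; the server name is a placeholder, and the schema-specific attribute names (the "available base DNs" and "unique ID" attributes Martin asked about) still have to be confirmed against the Apache DS documentation before this could work:

```properties
# /etc/ovirt-engine/aaa/apacheds.properties -- hypothetical sketch, NOT a
# tested profile. Start from a generic RFC-2307 profile instead of patching
# files under /usr/share, since those are overwritten on every update.
include = <rfc2307-openldap.properties>

# Placeholder hostname; Apache DS listens on 10389 by default, so the
# port is overridden here rather than in common.py.
vars.server = ldap.example.com
vars.port = 10389

pool.default.serverset.single.server = ${global:vars.server}
pool.default.serverset.single.port = ${global:vars.port}
```

The Apache-DS-specific base-DN and unique-ID attribute names would then be overridden on top of this include, once they are known.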
URL: From gianluca.cecchi at gmail.com Fri Feb 2 08:54:43 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 2 Feb 2018 09:54:43 +0100 Subject: [ovirt-users] ovirt 4.2.1 pre hosted engine deploy failure In-Reply-To: References: Message-ID: On Thu, Feb 1, 2018 at 7:19 PM, Simone Tiraboschi wrote: > >> You definitely hit this one: > https://bugzilla.redhat.com/show_bug.cgi?id=1539040 > host-deploy stops libvirt-guests, triggering a shutdown of all the running > VMs (including the HE one) > > We rebuilt host-deploy with a fix for that today. > It affects only hosts where libvirt-guests has already been configured > by a 4.2 host-deploy in the past. > As a workaround you have to manually stop libvirt-guests and > deconfigure it in /etc/sysconfig/libvirt-guests.conf before running > hosted-engine-setup again. > Ok. This is a test env that I want to give to power users so they can get a feel for the 4.2 new GUI, so I decided to start from scratch, redeploying the OS of the host, and with the correct initial NFS permissions all went well at the first attempt. Now I have a reachability problem with the engine VM from outside, but it is a different problem and I'm going to open a new thread for it if I don't solve it. > > >> If I understood correctly it seems that libvirtd took charge of the IP >> assignment, using the default 192.168.122.x network, while my host and my >> engine should be on 10.4.4.x...?? >> > > This is absolutely fine. > Let me explain: with the new ansible based flow we completely reverted the > hosted-engine deployment flow. > > Thanks for the new workflow explanation. Indeed, during my change-and-try attempts I also tried to destroy and undefine the "default" libvirt network, and the deploy complained about it. > >> BTW: on the host I have the network managed by NetworkManager. It is supported >> now in the upcoming 4.2.1, isn't it? >> > > Yes, it is. > > >> >> Ok. I confirm that in my new deploy I left NetworkManager up in the host configuration and all went well. 
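To make Simone's workaround above concrete: first stop the service on the host (`systemctl stop libvirt-guests`, and optionally disable it until the fixed host-deploy lands) so it cannot shut the engine VM down again, then deconfigure it before re-running hosted-engine-setup. As a sketch, the neutralized sysconfig file might look like this; the variable names are from the stock libvirt-guests sysconfig template, and the values are assumptions to verify on your host, since the thread does not show the file:

```
# /etc/sysconfig/libvirt-guests -- sketch of a deconfigured state.
# Don't auto-start or resume managed guests at boot:
ON_BOOT=ignore
# Don't wait on guests at host shutdown:
SHUTDOWN_TIMEOUT=0
```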
-------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Fri Feb 2 09:05:38 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 2 Feb 2018 10:05:38 +0100 Subject: [ovirt-users] Unable to deploy hosted engine to other hosts In-Reply-To: <2142307635.1745246.1517535111701@mail.yahoo.com> References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> <2142307635.1745246.1517535111701@mail.yahoo.com> Message-ID: Hi Andy, can you please attach engine.log and host-deploy logs from the engine VM? On Fri, Feb 2, 2018 at 2:31 AM, Andy wrote: > Support, > > I am having a problem with redeploying the hosted engine to the second and > third host in the cluster. This setup is from a clean install of 4.2 and > all the hosts are up and functional in the engine. When I try to deploy > the engine the install fails. > > Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: name=periodic/3 running object at 0x2807550> at 0x2807590> timeout=15, duration=270 at 0x1e0bc90> > task#=186 at 0x27f0590> > > The setup is a three host ovirt 4.2 running CentOS 7.4, with three gluster > volumes (engine and two data). I have attached the agent, broker, ovirt > setup, and vdsm logs. > > Any help is appreciated. > > Thanks > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbonazzo at redhat.com Fri Feb 2 09:11:08 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 2 Feb 2018 10:11:08 +0100 Subject: [ovirt-users] Unable to deploy hosted engine to other hosts In-Reply-To: <2142307635.1745246.1517535111701@mail.yahoo.com> References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> <2142307635.1745246.1517535111701@mail.yahoo.com> Message-ID: 2018-02-02 2:31 GMT+01:00 Andy : > Support, > Hi Andy, just a note this is not Support, this is Community :-) > > I am having a problem with redeploying the hosted engine to the second and > third host in the cluster. This setup is from a clean install of 4.2 and > all the hosts are up and functional in the engine. When I try to deploy > the engine the install fails. > > Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: name=periodic/3 running object at 0x2807550> at 0x2807590> timeout=15, duration=270 at 0x1e0bc90> > task#=186 at 0x27f0590> > > The setup is a three host ovirt 4.2 running CentOS 7.4, with three gluster > volumes (engine and two data). I have attached the agent, broker, ovirt > setup, and vdsm logs. > > Any help is appreciated. > Simone already replied, adding also Sahina to the loop. > > Thanks > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Fri Feb 2 09:22:40 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 2 Feb 2018 10:22:40 +0100 Subject: [ovirt-users] oVirt Survey 2018 results Message-ID: Thank you very much for having participated in oVirt Survey 2018! Results are now publicly available at http://bit.ly/2Ez909d We're now analyzing results for 4.3 planning. 
-- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Fri Feb 2 09:56:14 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 2 Feb 2018 10:56:14 +0100 Subject: [ovirt-users] ovirt 4.2.1 and uploading package profile problem In-Reply-To: References: Message-ID: 2018-02-01 12:25 GMT+01:00 Gianluca Cecchi : > On Thu, Feb 1, 2018 at 10:17 AM, Gianluca Cecchi < > gianluca.cecchi at gmail.com> wrote: > >> Hello, >> I'm testing a 4.2.1 environment that uses a proxy for yum. >> >> I see that after enabling the 4.2.1pre repos on my future host and installing >> ovirt packages, now every yum command gets this message in its output: >> >> Uploading Package Profile >> >> and after a couple of minutes' wait >> >> Unable to upload Package Profile >> >> Searching the internet I found posts related to katello... >> >> Indeed the ovirt install produced installation of the katello-agent and >> katello-agent-fact-plugin rpms (version 2.9.0.1-1) >> >> How can I solve this? Do I need to put the proxy anywhere or do I have to >> disable any unnecessary service? >> >> Thanks, >> >> Gianluca >> > > > What I notice is that on a CentOS 7 host with 4.1.9 I have these yum plugins > listed when I run "yum update" > > Loaded plugins: fastestmirror, langpacks > > Instead on this CentOS 7 host where I enabled the 4.2.1pre repos I get > > Loaded plugins: fastestmirror, langpacks, package_upload, product-id, > search-disabled-repos, > : subscription-manager > This system is not registered with an entitlement server. You can use > subscription-manager to register. > > During install I got in yum.log: > . . . 
> Jan 29 15:44:59 Installed: libvirt-daemon-driver- > interface-3.2.0-14.el7_4.7.x86_64 > Jan 29 15:45:00 Installed: vhostmd-0.5-12.el7.x86_64 > Jan 29 15:45:00 Installed: vdsm-hook-vhostmd-4.20.17-1.el7.centos.noarch > Jan 29 15:45:00 Installed: dnsmasq-2.76-2.el7_4.2.x86_64 > Jan 29 15:45:00 Installed: python-netifaces-0.10.4-3.el7.x86_64 > Jan 29 15:45:00 Installed: python-rhsm-certificates-1.19.10-1.el7_4.x86_64 > Jan 29 15:45:00 Installed: python-rhsm-1.19.10-1.el7_4.x86_64 > Jan 29 15:45:01 Installed: subscription-manager-1.19.23- > 1.el7.centos.x86_64 > Jan 29 15:45:01 Installed: katello-agent-fact-plugin-2.9.0.1-1.el7.noarch > Jan 29 15:45:01 Installed: usbredir-0.7.1-2.el7.x86_64 > Jan 29 15:45:01 Installed: scrub-2.5.2-7.el7.x86_64 > . . . > > The katello-agent rpm contains: > /etc/yum/pluginconf.d/package_upload.conf > > and as seen above it was also installed subscription-manager-1.19.23- > 1.el7.centos.x86_64 > that puts: > > /etc/yum/pluginconf.d/subscription-manager.conf > /etc/yum/pluginconf.d/product-id.conf > /etc/yum/pluginconf.d/search-disabled-repos.conf > > Reasons? > Ciao Gianluca, katello-agent has been added to hosts in 4.1.9[1] to ease integration with foreman/katello also with oVirt Node. if you don't use foreman you can disable the agent and the yum plugin but you may also consider adding katello to your datacenter [1] https://bugzilla.redhat.com/show_bug.cgi?id=1525933 > > Gianluca > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... 
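As a side note for anyone who wants the "Uploading Package Profile" messages gone without removing katello-agent: each yum plugin can be switched off individually through its file in /etc/yum/pluginconf.d/. A sketch, using the standard yum plugin configuration format:

```ini
; /etc/yum/pluginconf.d/package_upload.conf
; enabled=0 turns off the plugin that uploads the package profile on
; every yum run; the same [main]/enabled switch works for
; subscription-manager.conf and product-id.conf if those round-trips
; are unwanted too.
[main]
enabled=0
```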
URL: From robnunin at gmail.com Fri Feb 2 10:19:05 2018 From: robnunin at gmail.com (Roberto Nunin) Date: Fri, 2 Feb 2018 11:19:05 +0100 Subject: [ovirt-users] GUI trouble when adding FC datadomain Message-ID: Hi all I'm trying to set up an HE cluster, with an FC domain. HE is also on FC. When I try to add the first domain in the datacenter, I get this form: [image: embedded image 1] So I'm not able to choose any of the three volumes currently masked towards the chosen host. I've tried all the browsers I have: Firefox 58, Chrome 63, IE 11, MS Edge, with no change. Tried clicking in the rows, scrolling etc. with no success. Has anyone seen the same issue? Thanks in advance -- Roberto Nunin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 65845 bytes Desc: not available URL: From gianluca.cecchi at gmail.com Fri Feb 2 10:23:33 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 2 Feb 2018 11:23:33 +0100 Subject: [ovirt-users] ovirt 4.2.1 and uploading package profile problem In-Reply-To: References: Message-ID: On Fri, Feb 2, 2018 at 10:56 AM, Sandro Bonazzola wrote: > > >> >> The katello-agent rpm contains: >> /etc/yum/pluginconf.d/package_upload.conf >> >> and as seen above, subscription-manager-1.19.23-1 >> .el7.centos.x86_64 was also installed, >> which puts: >> >> /etc/yum/pluginconf.d/subscription-manager.conf >> /etc/yum/pluginconf.d/product-id.conf >> /etc/yum/pluginconf.d/search-disabled-repos.conf >> >> Reasons? >> > > Ciao Gianluca, > katello-agent has been added to hosts in 4.1.9[1] to ease integration with > foreman/katello, also with oVirt Node. 
> if you don't use foreman you can disable the agent and the yum plugin but > you may also consider adding katello to your datacenter > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1525933 > > >> >> Actually it seems to me that the culprit of the latencies is subscription-manager plug-in When running yum update on my CentOS 7 node I get this netstat: tcp 0 1 10.4.4.20:39994 209.132.183.108:443 SYN_SENT [root at ov42 ~]# nslookup 209.132.183.108 Server: 10.4.1.11 Address: 10.4.1.11#53 Non-authoritative answer: 108.183.132.209.in-addr.arpa name = subscription.rhsm.redhat.com. and it depends on my proxy settings put in /etc/yum.conf not acquired by subscription manager that uses its own file for these settings.... But my question is: why a CentOS system with oVirt should contact subscription.rhsm.redhat.com? Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Fri Feb 2 10:34:21 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Fri, 2 Feb 2018 11:34:21 +0100 Subject: [ovirt-users] ovirt 4.2.1 and uploading package profile problem In-Reply-To: References: Message-ID: 2018-02-02 11:23 GMT+01:00 Gianluca Cecchi : > On Fri, Feb 2, 2018 at 10:56 AM, Sandro Bonazzola > wrote: > >> >> >>> >>> The katello-agent rpm contains: >>> /etc/yum/pluginconf.d/package_upload.conf >>> >>> and as seen above it was also installed subscription-manager-1.19.23-1 >>> .el7.centos.x86_64 >>> that puts: >>> >>> /etc/yum/pluginconf.d/subscription-manager.conf >>> /etc/yum/pluginconf.d/product-id.conf >>> /etc/yum/pluginconf.d/search-disabled-repos.conf >>> >>> Reasons? >>> >> >> Ciao Gianluca, >> katello-agent has been added to hosts in 4.1.9[1] to ease integration >> with foreman/katello also with oVirt Node. 
>> If you don't use foreman you can disable the agent and the yum plugin, but >> you may also consider adding katello to your datacenter >> >> >> [1] https://bugzilla.redhat.com/show_bug.cgi?id=1525933 >> >> >>> >>> > Actually it seems to me that the culprit of the latencies is the > subscription-manager plug-in. > > When running yum update on my CentOS 7 node I get this netstat: > > tcp 0 1 10.4.4.20:39994 209.132.183.108:443 > SYN_SENT > > [root at ov42 ~]# nslookup 209.132.183.108 > Server: 10.4.1.11 > Address: 10.4.1.11#53 > > Non-authoritative answer: > 108.183.132.209.in-addr.arpa name = subscription.rhsm.redhat.com. > > and it depends on my proxy settings put in /etc/yum.conf, not acquired by > subscription manager, which uses its own file for these settings.... > > But my question is: why should a CentOS system with oVirt contact > subscription.rhsm.redhat.com? > Agreed, this is not needed. It should contact the local datacenter katello instance. Please open a BZ about it. > > Gianluca > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Fri Feb 2 11:20:14 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 2 Feb 2018 12:20:14 +0100 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: <20180125100629.GT2787@redhat.com> References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> Message-ID: Hello Richard, unfortunately upgrading virt-v2v is not an option. It would be nice, but the integration with vdsm is not yet ready for that option. On Thu, Jan 25, 2018 at 11:06 AM, Richard W.M. Jones wrote: [cut] > I don't know why it slowed down, but I'm pretty sure it's got nothing > to do with the version of oVirt/RHV. Especially in the initial phase > where it's virt-v2v reading the guest from vCenter. 
Something must > have changed or be different in the test and production environments. > > Are you converting the same guests? virt-v2v is data-driven, so > different guests require different operations, and those can take > different amounts of time to run. > I'm not migrating the same guests, I'm migrating different guests, but most of them share the same OS baseline. Most of these VMs are from the same RHEL 7 template and have little data difference (a few gigs). Do you know what the performance impact on vCenter is? I'd like to tune the vCenter as well as possible to improve the migration time. We have to migrate ~300 guests, and our maintenance window is very short. We don't want to continue the migration for months. Luca -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the largest library in the world. But the problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (1945-living) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From rjones at redhat.com Fri Feb 2 11:52:21 2018 From: rjones at redhat.com (Richard W.M. Jones) Date: Fri, 2 Feb 2018 11:52:21 +0000 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> Message-ID: <20180202115221.GA2787@redhat.com> On Fri, Feb 02, 2018 at 12:20:14PM +0100, Luca 'remix_tj' Lorenzetto wrote: > Hello Richard, > > unfortunately upgrading virt-v2v is not an option. Would be nice, but > integration with vdsm is not yet ready for that option. > > On Thu, Jan 25, 2018 at 11:06 AM, Richard W.M. Jones wrote: > [cut] > > I don't know why it slowed down, but I'm pretty sure it's got nothing > > to do with the version of oVirt/RHV. Especially in the initial phase > > where it's virt-v2v reading the guest from vCenter. 
Something must > > have changed or be different in the test and production environments. > > > > > Are you converting the same guests? virt-v2v is data-driven, so > > different guests require different operations, and those can take > > different amount of time to run. > > > > I'm not migrating the same guests, i'm migrating different guest, but > most of them share the same os baseline. > Most of these vms are from the same RHEL 7 template and have little > data difference (few gigs). > > Do you know which is the performance impact on vcenter? I'd like to > tune as best as possible the vcenter to improve the migration time. There is a section about this in the virt-v2v man page. I'm on a train at the moment but you should be able to find it. Try to run many conversions, at least 4 or 8 would be good places to start. > We have to migrate ~300 guests, and our maintenance window is very > short. We don't want continue the migration for months. SSH or VDDK method would be far faster but if you can't upgrade you're stuck with https to vCenter. Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into KVM guests. http://libguestfs.org/virt-v2v From ykaul at redhat.com Fri Feb 2 12:01:28 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 2 Feb 2018 14:01:28 +0200 Subject: [ovirt-users] GUI trouble when adding FC datadomain In-Reply-To: References: Message-ID: Which version are you using? Are you sure the LUNs are empty? Y. On Feb 2, 2018 11:19 AM, "Roberto Nunin" wrote: > Hi all > > I'm trying to setup ad HE cluster, with FC domain. > HE is also on FC. 
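Returning to the virt-v2v thread: Richard's suggestion to run many conversions at once (4 or 8 to start) can be sketched as a small driver script. This is a hedged illustration, not the project's tooling; the virt-v2v flags shown are illustrative and the vpx:// URI and storage domain name are placeholders to check against the virt-v2v man page for your version:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def v2v_command(guest, vcenter_uri, storage_domain):
    # Build one illustrative virt-v2v invocation; adjust flags to your
    # environment (see virt-v2v(1) for the vCenter input and RHV output options).
    return ["virt-v2v", "-ic", vcenter_uri, guest,
            "-o", "rhv", "-os", storage_domain]

def run_batch(commands, max_workers=4):
    """Run conversion commands max_workers at a time; return exit codes in order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(subprocess.call, commands))

if __name__ == "__main__":
    guests = ["vm%02d" % i for i in range(8)]
    # Stub commands so this sketch runs without a vCenter; in practice swap in
    # v2v_command(g, "vpx://user@vcenter.example.com/dc/cluster/host?no_verify=1", "data")
    cmds = [["true"] for _ in guests]
    print(run_batch(cmds, max_workers=4))
```

With ~300 guests and a short window, the `max_workers` value is the knob to experiment with against vCenter load.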
> > When I try to add the first domain in the datacenter, I get this form: > > [image: embedded image 1] > > So I'm not able to choose any of the three volumes currently masked > towards the chosen host. > I've tried all the browsers I have: Firefox 58, Chrome 63, IE 11, MS Edge, with > no change. > > Tried clicking in the rows, scrolling etc. with no success. > > Has anyone seen the same issue? > Thanks in advance > > -- > Roberto Nunin > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 65845 bytes Desc: not available URL: From farkey_2000 at yahoo.com Fri Feb 2 12:01:24 2018 From: farkey_2000 at yahoo.com (Andy) Date: Fri, 2 Feb 2018 12:01:24 +0000 (UTC) Subject: [ovirt-users] Unable to deploy hosted engine to other hosts In-Reply-To: References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> <2142307635.1745246.1517535111701@mail.yahoo.com> Message-ID: <334568777.1908888.1517572884997@mail.yahoo.com> Sorry, bad habit..... community!!! On Friday, February 2, 2018, 4:11:50 AM EST, Sandro Bonazzola wrote: 2018-02-02 2:31 GMT+01:00 Andy : Support, Hi Andy, just a note: this is not Support, this is Community :-) I am having a problem with redeploying the hosted engine to the second and third host in the cluster. This setup is from a clean install of 4.2 and all the hosts are up and functional in the engine. When I try to deploy the engine the install fails. Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: at 0x2807590> timeout=15, duration=270 at 0x1e0bc90> task#=186 at 0x27f0590> The setup is three hosts running ovirt 4.2 on CentOS 7.4, with three gluster volumes (engine and two data). I have attached the agent, broker, ovirt setup, and vdsm logs. 
Any help is appreciated. Simone already replied, adding also Sahina to the loop. Thanks _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA | | TRIED. TESTED. TRUSTED. | -------------- next part -------------- An HTML attachment was scrubbed... URL: From farkey_2000 at yahoo.com Fri Feb 2 12:04:55 2018 From: farkey_2000 at yahoo.com (Andy) Date: Fri, 2 Feb 2018 12:04:55 +0000 (UTC) Subject: [ovirt-users] Unable to deploy hosted engine to other hosts In-Reply-To: References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> <2142307635.1745246.1517535111701@mail.yahoo.com> Message-ID: <1460118343.1957528.1517573095183@mail.yahoo.com> Simone, thanks for the help. Attached are the requested logs. One thing to note: I have three networks, one each for gluster, migration, and storage (201, 200, and 199). thanks On Friday, February 2, 2018, 4:06:11 AM EST, Simone Tiraboschi wrote: Hi Andy, can you please attach engine.log and host-deploy logs from the engine VM? On Fri, Feb 2, 2018 at 2:31 AM, Andy wrote: Support, I am having a problem with redeploying the hosted engine to the second and third host in the cluster. This setup is from a clean install of 4.2 and all the hosts are up and functional in the engine. When I try to deploy the engine the install fails. Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: at 0x2807590> timeout=15, duration=270 at 0x1e0bc90> task#=186 at 0x27f0590> The setup is three hosts running ovirt 4.2 on CentOS 7.4, with three gluster volumes (engine and two data). I have attached the agent, broker, ovirt setup, and vdsm logs. 
Thanks _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: engine-logs.tar.gz Type: application/gzip Size: 100883 bytes Desc: not available URL: From f.thommen at dkfz-heidelberg.de Fri Feb 2 11:29:28 2018 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Fri, 2 Feb 2018 12:29:28 +0100 Subject: [ovirt-users] oVirt Upgrade 4.1 -> 4.2 fails with YUM dependency problems (CentOS) Message-ID: <8e1c50de-23d8-28c7-c55a-fcef35020d00@dkfz-heidelberg.de> Hi, following the oVirt upgrade procedure on a CentOS system (https://www.ovirt.org/release/4.2.0/#centos--rhel) I fail at the step `yum update "ovirt-*-setup*"` with a YUM dependency problem: $ yum update "ovirt-*-setup*" [...] ovirt-engine-setup-plugin-ovirt-engine conflicts with ovirt-engine-4.1.3.5-1.el7.centos.noarch [...] $ For the complete yum update output see below. I also noticed that, after having installed ovirt-release42, the old oVirt 4.1 repos were still present on the system. I disabled them manually, to no avail. `yum update ovirt-engine` results in "No packages marked for update". Does anyone know how we can fix this problem? 
Cheers frank ----- complete yum update output ----- $ yum update "ovirt-*-setup*" Loaded plugins: fastestmirror, versionlock Loading mirror speeds from cached hostfile * base: centos.alpha-labs.net * extras: centos.mirror.iphh.net * ovirt-4.2: ftp.nluug.nl * ovirt-4.2-epel: epel.besthosting.ua * updates: centos.mirror.iphh.net Resolving Dependencies --> Running transaction check ---> Package ovirt-engine-dwh-setup.noarch 0:4.1.9-1.el7.centos will be updated ---> Package ovirt-engine-dwh-setup.noarch 0:4.2.1-1.el7.centos will be an update --> Processing Dependency: rh-postgresql95-postgresql-server for package: ovirt-engine-dwh-setup-4.2.1-1.el7.centos.noarch ---> Package ovirt-engine-setup.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-setup.noarch 0:4.2.0.2-1.el7.centos will be an update ---> Package ovirt-engine-setup-base.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-setup-base.noarch 0:4.2.0.2-1.el7.centos will be an update --> Processing Dependency: otopi >= 1.7.1 for package: ovirt-engine-setup-base-4.2.0.2-1.el7.centos.noarch --> Processing Dependency: ovirt-engine-lib >= 4.2.0.2-1.el7.centos for package: ovirt-engine-setup-base-4.2.0.2-1.el7.centos.noarch ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-setup-plugin-ovirt-engine.noarch 0:4.2.0.2-1.el7.centos will be an update --> Processing Dependency: openvswitch-ovn-central >= 2.7 for package: ovirt-engine-setup-plugin-ovirt-engine-4.2.0.2-1.el7.centos.noarch --> Processing Dependency: ovirt-provider-ovn for package: ovirt-engine-setup-plugin-ovirt-engine-4.2.0.2-1.el7.centos.noarch ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-setup-plugin-ovirt-engine-common.noarch 0:4.2.0.2-1.el7.centos will be an update ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch 0:4.1.9.1-1.el7.centos 
will be updated ---> Package ovirt-engine-setup-plugin-vmconsole-proxy-helper.noarch 0:4.2.0.2-1.el7.centos will be an update ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-setup-plugin-websocket-proxy.noarch 0:4.2.0.2-1.el7.centos will be an update ---> Package ovirt-imageio-proxy-setup.noarch 0:1.0.0-0.201701151456.git89ae3b4.el7.centos will be updated ---> Package ovirt-imageio-proxy-setup.noarch 0:1.2.0-1.el7.centos will be an update --> Running transaction check ---> Package openvswitch-ovn-central.x86_64 1:2.7.3-1.1fc27.el7 will be installed --> Processing Dependency: openvswitch-ovn-common for package: 1:openvswitch-ovn-central-2.7.3-1.1fc27.el7.x86_64 --> Processing Dependency: openvswitch for package: 1:openvswitch-ovn-central-2.7.3-1.1fc27.el7.x86_64 ---> Package otopi.noarch 0:1.6.3-1.el7.centos will be updated --> Processing Dependency: otopi = 1.6.3-1.el7.centos for package: otopi-java-1.6.3-1.el7.centos.noarch ---> Package otopi.noarch 0:1.7.5-1.el7.centos will be an update ---> Package ovirt-engine-lib.noarch 0:4.1.9.1-1.el7.centos will be updated ---> Package ovirt-engine-lib.noarch 0:4.2.0.2-1.el7.centos will be an update ---> Package ovirt-provider-ovn.noarch 0:1.2.2-1.el7.centos will be installed --> Processing Dependency: python-openvswitch >= 2.7 for package: ovirt-provider-ovn-1.2.2-1.el7.centos.noarch --> Processing Dependency: python2-ovsdbapp for package: ovirt-provider-ovn-1.2.2-1.el7.centos.noarch ---> Package rh-postgresql95-postgresql-server.x86_64 0:9.5.9-1.el7 will be installed --> Processing Dependency: rh-postgresql95-postgresql-libs(x86-64) = 9.5.9-1.el7 for package: rh-postgresql95-postgresql-server-9.5.9-1.el7.x86_64 --> Processing Dependency: rh-postgresql95-postgresql(x86-64) = 9.5.9-1.el7 for package: rh-postgresql95-postgresql-server-9.5.9-1.el7.x86_64 --> Processing Dependency: rh-postgresql95-runtime for package: 
rh-postgresql95-postgresql-server-9.5.9-1.el7.x86_64 --> Processing Dependency: /usr/bin/scl_source for package: rh-postgresql95-postgresql-server-9.5.9-1.el7.x86_64 --> Processing Dependency: libpq.so.rh-postgresql95-5()(64bit) for package: rh-postgresql95-postgresql-server-9.5.9-1.el7.x86_64 --> Running transaction check ---> Package openvswitch.x86_64 1:2.7.3-1.1fc27.el7 will be installed ---> Package openvswitch-ovn-common.x86_64 1:2.7.3-1.1fc27.el7 will be installed ---> Package otopi-java.noarch 0:1.6.3-1.el7.centos will be updated ---> Package otopi-java.noarch 0:1.7.5-1.el7.centos will be an update ---> Package python2-openvswitch.noarch 1:2.7.3-1.1fc27.el7 will be installed ---> Package python2-ovsdbapp.noarch 0:0.6.0-1.el7 will be installed --> Processing Dependency: python-pbr >= 2.0.0 for package: python2-ovsdbapp-0.6.0-1.el7.noarch --> Processing Dependency: python-netaddr for package: python2-ovsdbapp-0.6.0-1.el7.noarch --> Processing Dependency: python-fixtures for package: python2-ovsdbapp-0.6.0-1.el7.noarch ---> Package rh-postgresql95-postgresql.x86_64 0:9.5.9-1.el7 will be installed ---> Package rh-postgresql95-postgresql-libs.x86_64 0:9.5.9-1.el7 will be installed ---> Package rh-postgresql95-runtime.x86_64 0:2.2-2.el7 will be installed ---> Package scl-utils.x86_64 0:20130529-18.el7_4 will be installed --> Running transaction check ---> Package python-fixtures.noarch 0:3.0.0-2.el7 will be installed --> Processing Dependency: python-testtools >= 0.9.22 for package: python-fixtures-3.0.0-2.el7.noarch ---> Package python-netaddr.noarch 0:0.7.5-7.el7 will be installed ---> Package python2-pbr.noarch 0:3.1.1-1.el7 will be installed --> Running transaction check ---> Package python-testtools.noarch 0:1.8.0-2.el7 will be installed --> Processing Dependency: python-unittest2 >= 0.8.0 for package: python-testtools-1.8.0-2.el7.noarch --> Processing Dependency: python-mimeparse for package: python-testtools-1.8.0-2.el7.noarch --> Processing Dependency: 
python-extras for package: python-testtools-1.8.0-2.el7.noarch --> Running transaction check ---> Package python-extras.noarch 0:0.0.3-2.el7 will be installed ---> Package python-mimeparse.noarch 0:0.1.4-1.el7 will be installed ---> Package python-unittest2.noarch 0:1.0.1-1.el7 will be installed --> Processing Dependency: python-traceback2 for package: python-unittest2-1.0.1-1.el7.noarch --> Running transaction check ---> Package python-traceback2.noarch 0:1.4.0-2.el7 will be installed --> Processing Dependency: python-linecache2 for package: python-traceback2-1.4.0-2.el7.noarch --> Running transaction check ---> Package python-linecache2.noarch 0:1.0.0-1.el7 will be installed --> Processing Conflict: ovirt-engine-setup-plugin-ovirt-engine-4.2.0.2-1.el7.centos.noarch conflicts ovirt-engine < 4.1.7 --> Finished Dependency Resolution Error: ovirt-engine-setup-plugin-ovirt-engine conflicts with ovirt-engine-4.1.3.5-1.el7.centos.noarch You could try using --skip-broken to work around the problem You could try running: rpm -Va --nofiles --nodigest $ From ykaul at redhat.com Fri Feb 2 12:18:13 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 2 Feb 2018 14:18:13 +0200 Subject: [ovirt-users] GUI trouble when adding FC datadomain In-Reply-To: References: Message-ID: On Feb 2, 2018 1:09 PM, "Roberto Nunin" wrote: Hi Yaniv Currently Engine is 4.2.0.2-1 on CentOS7.4 I've used using oVirt Node image 4.2-2017122007.iso LUN I need is certainly empty. (the second one in the list). Please file a bug with logs, so we can understand the issue better. Y. 2018-02-02 13:01 GMT+01:00 Yaniv Kaul : > Which version are you using? Are you sure the LUNs are empty? > Y. > > > On Feb 2, 2018 11:19 AM, "Roberto Nunin" wrote: > >> Hi all >> >> I'm trying to setup ad HE cluster, with FC domain. >> HE is also on FC. 
>> >> When I try to add the first domain in the datacenter, I've this form: >> >> [image: Immagine incorporata 1] >> >> So I'm not able to choose any of the three volumes currently masked >> towards the chosen host. >> I've tried all browser I've: Firefox 58, Chrome 63, IE 11, MS Edge, with >> no changes. >> >> Tried to click in the rows, scrolling etc. with no success. >> >> Someone has found the same issue ? >> Thanks in advance >> >> -- >> Roberto Nunin >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> -- Roberto Nunin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 65845 bytes Desc: not available URL: From stirabos at redhat.com Fri Feb 2 12:21:16 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 2 Feb 2018 13:21:16 +0100 Subject: [ovirt-users] Unable to deploy hosted engine to other hosts In-Reply-To: <1460118343.1957528.1517573095183@mail.yahoo.com> References: <2142307635.1745246.1517535111701.ref@mail.yahoo.com> <2142307635.1745246.1517535111701@mail.yahoo.com> <1460118343.1957528.1517573095183@mail.yahoo.com> Message-ID: On Fri, Feb 2, 2018 at 1:04 PM, Andy wrote: > Simone, > > thanks for the help. Attached at the rquested logs. One thing to note I > have three networks, one for gluster, migration, and storage (201, 200, and > 199). 
> > thanks > host-deploy failed for this: 2018-01-31 00:52:48,803-0500 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE setup METHOD otopi.plugins.otopi.packagers.dnfpackager.Plugin._setup (None) 2018-01-31 00:52:48,805-0500 DEBUG otopi.context context._executeMethod:128 Stage setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup 2018-01-31 00:52:48,805-0500 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE setup METHOD otopi.plugins.otopi.packagers.yumpackager.Plugin._setup (None) 2018-01-31 00:52:48,810-0500 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/tmp/ovirt-yTGUAVifQ3/pythonlib/otopi/context.py", line 133, in _executeMethod method['method']() File "/tmp/ovirt-yTGUAVifQ3/otopi-plugins/otopi/packagers/yumpackager.py", line 219, in _setup with self._miniyum.transaction(): File "/tmp/ovirt-yTGUAVifQ3/pythonlib/otopi/miniyum.py", line 336, in __enter__ self._managed.beginTransaction() File "/tmp/ovirt-yTGUAVifQ3/pythonlib/otopi/miniyum.py", line 720, in beginTransaction self._yb.doLock() File "/usr/lib/python2.7/site-packages/yum/__init__.py", line 2234, in doLock raise Errors.LockError(0, msg, oldpid) LockError: Existing lock /var/run/yum.pid: another copy is running as pid 23746. 2018-01-31 00:52:48,817-0500 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Environment setup': Existing lock /var/run/yum.pid: another copy is running as pid 23746. I'd suggest to check what is holding yum lock and simply retry. > > > On Friday, February 2, 2018, 4:06:11 AM EST, Simone Tiraboschi < > stirabos at redhat.com> wrote: > > > Hi Andy, > can you please attach engine.log and host-deploy logs from the engine VM? > > On Fri, Feb 2, 2018 at 2:31 AM, Andy wrote: > > Support, > > I am having a problem with redeploying the hosted engine to the second and > third host in the cluster. 
This setup is from a clean install of 4.2 and > all the hosts are up and functional in the engine. When I try to deploy > the engine the install fails. > > Checking the VDSM service I get: vdsm[4752]: WARN Worker blocked: name=periodic/3 running HostMonitor object at 0x2807550> at 0x2807590> timeout=15, duration=270 at > 0x1e0bc90> task#=186 at 0x27f0590> > > The setup is a three host ovirt 4.2 running CentOS 7.4, with three gluster > volumes (engine and two data). I have attached the agent, broker, ovirt > setup, and vdsm logs. > > Any help is appreciated. > > Thanks > > ______________________________ _________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/ mailman/listinfo/users > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From statsenko_ky at interrao.ru Fri Feb 2 13:17:21 2018 From: statsenko_ky at interrao.ru (=?utf-8?B?0KHRgtCw0YbQtdC90LrQviDQmtC+0L3RgdGC0LDQvdGC0LjQvSDQrtGA0Yw=?= =?utf-8?B?0LXQstC40Yc=?=) Date: Fri, 2 Feb 2018 13:17:21 +0000 Subject: [ovirt-users] oVirt 4.1.9 quota problem Message-ID: Hello! We discovered some quota calculation error after oVirt was upgraded to 4.1.9. Storage quota calculates incorrectly now. See attached screenshot. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ovirt quota error.png Type: image/png Size: 18142 bytes Desc: ovirt quota error.png URL: From akrejcir at redhat.com Fri Feb 2 14:05:32 2018 From: akrejcir at redhat.com (Andrej Krejcir) Date: Fri, 2 Feb 2018 15:05:32 +0100 Subject: [ovirt-users] oVirt 4.1.9 quota problem In-Reply-To: References: Message-ID: Hi, This looks like a bug. Please open one here: https://bugzilla.redhat.com/enter_bug.cgi?product=ovirt-engine And attach some information about the storage quota usage when this happens, so it can be reproduced. Andrej On 2 February 2018 at 14:17, ???????? ?????????? ??????? 
< statsenko_ky at interrao.ru> wrote: > Hello! > > We discovered some quota calculation error after oVirt was upgraded to > 4.1.9. > > Storage quota calculates incorrectly now. > > See attached screenshot. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Fri Feb 2 14:10:14 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Fri, 2 Feb 2018 15:10:14 +0100 Subject: [ovirt-users] Using upstream QEMU In-Reply-To: <039252F9-F2C8-45B0-A55E-2AE5E70AE02B@codex.online> References: <039252F9-F2C8-45B0-A55E-2AE5E70AE02B@codex.online> Message-ID: <06246296-B97D-4371-A2C4-3F60D767D5FD@redhat.com> > On 1 Feb 2018, at 13:43, Harry Mallon wrote: > > Apologies for the formatting in the following message, I can't get Office for Mac to play ball... > > > Harry Mallon > CODEX | Senior Software Engineer > 60 Poland Street | London | England | W1F 7NT > E harry.mallon at codex.online | T +44 203 7000 989 > On 01/02/2018, 09:28, "Michal Skrivanek" > wrote: >> On 31 Jan 2018, at 16:53, Yedidyah Bar David > wrote: >> >> On Wed, Jan 31, 2018 at 5:43 PM, Harry Mallon > wrote: >>> Hello all, >>> >>> Has anyone used oVirt with non-oVirt provided QEMU versions? >>> I need a feature provided by upstream QEMU, but it is disabled in the oVirt/CentOS7 QEMU RPM. > > just curious - which one? > Typically the reason for disabling it is that it?s not really stable > > I am trying to run OSX guests on a host (Apple hardware). The "applesmc" device is part of that puzzle and is disabled in the Red Hat QEMU. > >>> >>> I have two possible methods to avoid the issue: >>> 1. Fedora has a more recent QEMU which is closer to 'stock'. I see that oVirt 4.2 has no Fedora support, >> >> Indeed, mostly >> >>> but is it possible to install the host onto a Fedora machine? 
>> >> Didn't try this recently, but it might require not-too-much work with >> fc25 or so. >> IIRC fc27 is python3-only, and this will require more work (which is >> ongoing, but >> don't hold your breath). >> >>> I am trying to use the master branch rpms as recommended in the "No Fedora Support" note with no luck currently. >> >> Another option is to try to rebuild the fedora srpm for CentOS 7. >> >>> 2. Is it safe/sensible to use oVirt with a CentOS7 host running an upstream QEMU version? >> >> No idea. If it's only for development/testing, I'd say give it a try. > > it's as sensible as running any bleeding edge stuff. It should work if you manage to resolve the qemu deps. > > you could also rebuild qemu-kvm-ev without the patch which blacklists the feature you want to see. > > I was able to patch and rebuild qemu-kvm-ev, but I think I have hit more problems using the patched firmware from here: http://www.contrib.andrew.cmu.edu/~somlo/OSXKVM/ . Hopefully that gets me a little closer though. Getting these Mac VMs to work has so far been a huge pain. I did try that a couple of years ago; it did work eventually, but I wasn't able to get accelerated graphics, and without that it was kind of useless. > > Thanks, > michal >> >>> >>> Thanks, >>> Harry >>> >>> >>> Harry Mallon >>> CODEX | Senior Software Engineer >>> 60 Poland Street | London | England | W1F 7NT >>> E harry.mallon at codex.online | T +44 203 7000 989 >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> -- >> Didi >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > Harry -------------- next part -------------- An HTML attachment was scrubbed...
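Michal's suggestion of rebuilding qemu-kvm-ev without the blacklisting patch amounts to dropping the relevant Patch line from the spec file before running rpmbuild. A minimal sketch of just that editing step — the spec excerpt and patch name below are invented for illustration; the real qemu-kvm-ev spec uses different names:

```shell
# Work on a throwaway copy of a (hypothetical) spec excerpt.
cat > /tmp/qemu-kvm-ev.spec.sample <<'EOF'
Patch0001: 0001-some-backport.patch
Patch0002: 0002-disable-unsupported-devices.patch
BuildRequires: zlib-devel
EOF

# Drop the line referencing the (hypothetical) blacklisting patch.
sed -i '/disable-unsupported-devices/d' /tmp/qemu-kvm-ev.spec.sample

cat /tmp/qemu-kvm-ev.spec.sample
# prints:
# Patch0001: 0001-some-backport.patch
# BuildRequires: zlib-devel
```

The surrounding flow, roughly, would be: `yumdownloader --source qemu-kvm-ev` to fetch the srpm, `rpm -i` it into `~/rpmbuild`, edit the spec as above, then `rpmbuild -ba` the spec — the exact patch to remove has to be identified by reading the spec itself.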
URL: From chashock at speakfree.net Fri Feb 2 16:33:12 2018 From: chashock at speakfree.net (Chas Hockenbarger) Date: Fri, 02 Feb 2018 10:33:12 -0600 Subject: [ovirt-users] oVirt Upgrade 4.1 -> 4.2 fails with YUM dependency problems (CentOS) In-Reply-To: <8e1c50de-23d8-28c7-c55a-fcef35020d00@dkfz-heidelberg.de> Message-ID: An HTML attachment was scrubbed... URL: From stirabos at redhat.com Fri Feb 2 17:00:36 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 2 Feb 2018 18:00:36 +0100 Subject: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration. In-Reply-To: <6c18742a.bb33.1615142da93.Coremail.pym0914@163.com> References: <22ca0fd1.aee0.1614c0a03df.Coremail.pym0914@163.com> <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> <6c18742a.bb33.1615142da93.Coremail.pym0914@163.com> Message-ID: On Thu, Feb 1, 2018 at 1:06 PM, Pym wrote: > The environment on my side may be different from the link. My VM1 can be > used normally after it is started on host2, but there is still information > left on host1 that is not cleaned up. > > Only the interface and background can still get the information of vm1 on > host1, but the vm2 has been successfully started on host2, with the HA > function. > > I would like to ask a question: is the UUID of the virtual machine > stored in the database, or where is it maintained? Is it not successfully > deleted after using the HA function? > > I just encountered a similar behavior: after a reboot of the host 'vdsm-client Host getVMFullList' is still reporting an old VM that is not visible with 'virsh -r list --all'. I filed a bug to track it: https://bugzilla.redhat.com/show_bug.cgi?id=1541479 > > > > On 2018-02-01 16:12:16, "Simone Tiraboschi" wrote:
> > > > On Thu, Feb 1, 2018 at 2:21 AM, Pym wrote: > >> >> I checked vm1: it stays in the up state and can be used, but on host1, >> after the shutdown, there is a suspended vm1 that cannot be used; this is the problem >> now. >> >> In host1, you can get the information of vm1 using the "vdsm-client Host >> getVMList", but you can't get the vm1 information using the "virsh list". >> >> > Maybe a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1505399 > > > Arik? > > > >> >> >> >> On 2018-02-01 07:16:37, "Simone Tiraboschi" wrote: >> >> >> >> On Wed, Jan 31, 2018 at 12:46 PM, Pym wrote: >>> Hi: >>> >>> The current environment is as follows: >>> >>> Ovirt-engine version 4.2.0 is a source code compilation and >>> installation. Two hosts were added, host1 and host2. On host1, a >>> virtual machine vm1 is created, and a vm2 is created on host2, and HA is >>> configured. >>> >>> Operation steps: >>> >>> Use the shutdown -r command on host1. Vm1 successfully migrated to host2. >>> When host1 is restarted, the following situation occurs: >>> >>> The state of the vm2 will be shown in two images, switching between up and >>> pause. >>> >>> When I perform the "vdsm-client Host getVMList" in host1, I will get the >>> information of vm1. When I execute the "vdsm-client Host getVMList" in >>> host2, I will get the information of vm1 and vm2. >>> When I do "virsh list" in host1, there is no virtual machine >>> information. When I execute "virsh list" at host2, I will get information >>> of vm1 and vm2. >>> >>> How can this problem be solved? >>> >>> Is it the case that vm1 did not remove the information on host1 during >>> the migration, or is there any other reason? >>> >> >> Did you also check if your vms always remained up? >> In 4.2 we have libvirt-guests service on the hosts which tries to >> properly shutdown the running VMs on host shutdown. >> >> >>> >>> Thank you.
>>> >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> >> > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35900 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34515 bytes Desc: not available URL: From tadavis at lbl.gov Fri Feb 2 20:10:16 2018 From: tadavis at lbl.gov (Thomas Davis) Date: Fri, 2 Feb 2018 12:10:16 -0800 Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. Message-ID: Is this supported? I have a node, that centos 7.4 minimal is installed on, with an interface setup for an IP address. I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run screen, and then do the 'hosted-engine --deploy' command. It hangs on: [ INFO ] changed: [localhost] [ INFO ] TASK [Get ovirtmgmt route table id] [ ERROR ] fatal: [localhost]: FAILED! 
=> {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.004845", "end": "2018-02-02 12:03:30.794860", "rc": 0, "start": "2018-02-02 12:03:30.790015", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up [ INFO ] Cleaning temporary resources [ INFO ] TASK [Gathering Facts] [ INFO ] ok: [localhost] [ INFO ] TASK [Remove local vm dir] [ INFO ] ok: [localhost] [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202120333.conf' [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine deployment failed: please check the logs for the issue, fix accordingly or re-deploy from scratch. Log file is located at /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log but the VM is up and running, just attached to the 192.168.122.0/24 subnet [root at d8-r13-c2-n1 ~]# ssh root at 192.168.122.37 root at 192.168.122.37's password: Last login: Fri Feb 2 11:54:47 2018 from 192.168.122.1 [root at ovirt ~]# systemctl status ovirt-engine ? ovirt-engine.service - oVirt Engine Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-02-02 11:54:42 PST; 11min ago Main PID: 24724 (ovirt-engine.py) CGroup: /system.slice/ovirt-engine.service ??24724 /usr/bin/python /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify start ??24856 ovirt-engine -server -XX:+TieredCompilation -Xms3971M -Xmx3971M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse... Feb 02 11:54:41 ovirt.crt.nersc.gov systemd[1]: Starting oVirt Engine... 
Feb 02 11:54:41 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02 11:54:41,767-0800 ovirt-engine: INFO _detectJBossVersion:187 Detecting JBoss version. Running: /usr/lib/jvm/jre/...600000', '- Feb 02 11:54:42 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02 11:54:42,394-0800 ovirt-engine: INFO _detectJBossVersion:207 Return code: 0, | stdout: '[u'WildFly Full 11.0.0....tderr: '[]' Feb 02 11:54:42 ovirt.crt.nersc.gov systemd[1]: Started oVirt Engine. Feb 02 11:55:25 ovirt.crt.nersc.gov python2[25640]: ansible-stat Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True Feb 02 11:55:29 ovirt.crt.nersc.gov python2[25698]: ansible-stat Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25741]: ansible-stat Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25767]: ansible-stat Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True Feb 02 11:55:31 ovirt.crt.nersc.gov python2[25795]: ansible-stat Invoked with checksum_algorithm=sha1 get_checksum=True follow=False path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True The 'ip rule list' never has an ovirtmgmt rule/table in it.. which means the ansible script loops then dies; vdsmd has never configured the network on the node. [root at d8-r13-c2-n1 ~]# systemctl status vdsmd -l ? 
vdsmd.service - Virtual Desktop Server Manager Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor preset: enabled) Active: active (running) since Fri 2018-02-02 11:55:11 PST; 14min ago Main PID: 7654 (vdsmd) CGroup: /system.slice/vdsmd.service ??7654 /usr/bin/python2 /usr/share/vdsm/vdsmd Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running dummybr Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running tune_system Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running test_space Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running test_lo Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop Server Manager. Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File: /var/run/vdsm/trackedInterfaces/vnet0 already removed Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, ignoring event '|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0' args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering up', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', 'type': 'vnc', 'port': '5900'}], 'hash': '5328187475809024041', 'cpuUser': '0.00', 'monitorResponse': '0', 'elapsedTime': '0', 'cpuSys': '0.00', 'vcpuPeriod': 100000L, 'timeOffset': '0', 'clientIp': '', 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}} Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available. Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM stats will be missing. Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in favor of ping2 and confirmConnectivity Do I need to install a complete ovirt-engine on the node first, bring the node into ovirt, then bring up hosted-engine? I'd like to avoid this and just go straight to hosted-engine setup. thomas -------------- next part -------------- An HTML attachment was scrubbed... 
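The ansible task that times out in Thomas's deploy is only parsing `ip rule` output: once vdsm configures the management network it adds source-routing rules for ovirtmgmt, and the pipeline from the error message extracts the route table id from the 9th whitespace-separated field. With a sample rule line of the shape vdsm normally adds (the priority, addresses, and table id below are invented), the pipeline behaves like this — on the failing host it retries 50 times and gives up because no `ovirtmgmt` rule ever appears, i.e. vdsm never set up the management network:

```shell
# Invented sample of the kind of rule vdsm adds for ovirtmgmt.
sample='32764: from all to 10.10.0.0/24 iif ovirtmgmt lookup 1054377881'

# The exact pipeline from the deploy error message ($9 = the table id):
echo "$sample" | grep ovirtmgmt | sed 's/\[.*\] //g' | awk '{ print $9 }'
# prints: 1054377881
```

On a real host, `ip rule list | grep ovirtmgmt` returning nothing at that point confirms the problem is in vdsm's network setup, not in the parsing.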
URL: From f.thommen at dkfz-heidelberg.de Fri Feb 2 20:29:33 2018 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Fri, 2 Feb 2018 21:29:33 +0100 Subject: [ovirt-users] oVirt Upgrade 4.1 -> 4.2 fails with YUM dependency problems (CentOS) In-Reply-To: References: Message-ID: <72845a11-5a72-8a5c-ba2f-a71bbc3c9b66@dkfz-heidelberg.de> On 02/02/18 17:33, Chas Hockenbarger wrote: > I haven't tried this yet, but looking at the detailed error, the > implication is that your current install is less than 4.1.7, which is > where the conflict is. Have you tried updating to > 4.1.7 before upgrading? I had tried with `yum upgrade` to no avail and just realized, that this has to be done with ovirt-engine commands. I'll do this after the weekend. frank From maozza at gmail.com Sat Feb 3 07:06:22 2018 From: maozza at gmail.com (maoz zadok) Date: Sat, 3 Feb 2018 09:06:22 +0200 Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0 Message-ID: Hello All, I'm new to oVirt, I'm trying with no success to set up the networking on an oVirt 4.2.0 node, and I think I'm missing something. background: interfaces em1-4 is bonded to bond0 VLAN configured on bond0.1 and bridged to ovirtmgmt for the management interface. I'm not sure its updated to version 4.2.0 but I followed this post: https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/ with this setting, the NetworkManager keep starting up on reboot, and the interfaces are not managed by oVirt (and the nice traffic graphs are not shown). my question: Is NetworkManager need to be disabled as in the above post? Do I need to manage the networking using (nmtui) NetworkManager? Thanks! Maoz -------------- next part -------------- An HTML attachment was scrubbed... 
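For the bond + VLAN + bridge layout Maoz describes, the linked how-to boils down to initscripts-style ifcfg files along these lines. This is only a sketch: the device names and VLAN tag are taken from his description, the addresses are placeholders, and on a 4.2 node vdsm persists the network configuration itself once the host is added to the engine, so hand-written files like these mainly serve to get the management network up before deployment. `NM_CONTROLLED=no` keeps NetworkManager from managing the interfaces, which matches the how-to's assumption:

```
# /etc/sysconfig/network-scripts/ifcfg-em1   (repeat for em2..em4)
DEVICE=em1
MASTER=bond0
SLAVE=yes
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS='mode=4 miimon=100'
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-bond0.1
DEVICE=bond0.1
VLAN=yes
BRIDGE=ovirtmgmt
ONBOOT=yes
NM_CONTROLLED=no

# /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt
DEVICE=ovirtmgmt
TYPE=Bridge
IPADDR=192.0.2.10
NETMASK=255.255.255.0
ONBOOT=yes
NM_CONTROLLED=no
```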
URL: From vincent at epicenergy.ca Sat Feb 3 10:31:38 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 3 Feb 2018 02:31:38 -0800 Subject: [ovirt-users] Windows 10 floppy not found during setup Message-ID: I am trying to install Windows 10 on oVirt 4.2. I pushed the virtio-win iso and vfd files to my ISO domain. I created the VM, loaded the CD-ROM with the Windows install ISO, and in the "run once" settings I chose the virtio-win selection as a floppy. However, when entering Windows setup, there is no drive to install to, and no floppy shows up in the usual place to install the drivers. The installation did succeed when selecting "IDE" as the target disk instead of the virtio or virtio-SCSI options. Is there any way to install the drivers after the fact and then change the disk mode from IDE to virtio? Is the method on this page https://www.ovirt.org/documentation/how-to/virtual-machines/create-a-windows-7-virtual-machine/ still relevant? Looks like they suggest swapping the CD disk once the Windows setup has begun. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From reznikov_aa at soskol.com Sat Feb 3 11:41:14 2018 From: reznikov_aa at soskol.com (reznikov_aa at soskol.com) Date: Sat, 03 Feb 2018 14:41:14 +0300 Subject: [ovirt-users] Change emulatedMachine in vm.conf. Hosted Engine not boot properly. Message-ID: Greetings, friends! I have problems starting the HostedEngine after upgrading oVirt from version 3.6 to 4.0. The VM starts, but boot does not continue properly. If I connect via VNC, I can see smbios..., machine id..., and nothing more. I was able to start the HE VM by changing the emulatedMachine type from 'pc' to 'rhel6.5.0' in the vm.conf file, and starting with --vm-conf=myvm.conf. How can I change the vm.conf in the OVF store? The solution described here https://access.redhat.com/solutions/2209751 did not help me; maybe there are other solutions to this problem? My test oVirt lab runs on VMware Workstation.
vdsm-4.18.21-1.el7.centos.x86_64 libvirt-libs-3.2.0-14.el7_4.7.x86_64 ovirt-engine-4.0.6.3-1.el7.centos.noarch Thanx. Alex. From rightkicktech at gmail.com Sat Feb 3 14:23:10 2018 From: rightkicktech at gmail.com (Alex K) Date: Sat, 3 Feb 2018 16:23:10 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Hi All, I have reproduced the backups failure. The VM that failed is named Win-FileServer and is a Windows 2016 server 64bit with 300GB of disk. During the cloning step the VM went unresponsive and I had to stop/start it. I am attaching the logs. I have another VM with the same OS (named DC-Server
> > Thanx, > Alex > > On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan > wrote: > > Hi, > > We have a cluster of 17 nodes, backed by GlusterFS storage, and using this > same script for backup. > we have no issues with it so far. > have you checked engine log file ? > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ------------------------------ > *From:* users-bounces at ovirt.org on behalf of > Alex K > *Sent:* Wednesday, January 24, 2018 4:18 PM > *To:* users > *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM > > Hi all, > > I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on > top glusterfs. > On some VMs (especially one Windows server 2016 64bit with 500 GB of > disk). Guest agents are installed at VMs. i almost always observe that > during the backup of the VM the VM is rendered unresponsive (dashboard > shows a question mark at the VM status and VM does not respond to ping or > to anything). > > For scheduled backups I use: > > https://github.com/wefixit-AT/oVirtBackup > > The script does the following: > > 1. snapshot VM (this is done ok without any failure) > > 2. Clone snapshot (this steps renders the VM unresponsive) > > 3. Export Clone > > 4. Delete clone > > 5. Delete snapshot > > > Do you have any similar experience? Any suggestions to address this? > > I have never seen such issue with hosted Linux VMs. > > The cluster has enough storage to accommodate the clone. > > > Thanx, > > Alex > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: engine.log Type: application/octet-stream Size: 662980 bytes Desc: not available URL: From ykaul at redhat.com Sat Feb 3 15:20:23 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sat, 3 Feb 2018 17:20:23 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: On Feb 3, 2018 3:24 PM, "Alex K" wrote: Hi All, I have reproduced the backups failure. The VM that failed is named Win-FileServer and is a Windows 2016 server 64bit with 300GB of disk. During the cloning step the VM went unresponsive and I had to stop/start it. I am attaching the logs.I have another VM with same OS (named DC-Server within the logs) but with smaller disk (60GB) which does not give any error when it is cloned. I see a line: EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM v2.sitedomain command SnapshotVDS failed: Message timeout which can be caused by communication issues I suggest adding relevant vdsm.log as well. Y. I appreciate any advise why I am facing such issue with the backups. thanx, Alex On Tue, Jan 30, 2018 at 12:49 AM, Alex K wrote: > Ok. I will reproduce and collect logs. > > Thanx, > Alex > > On Jan 29, 2018 20:21, "Mahdi Adnan" wrote: > > I have Windows VMs, both client and server. > if you provide the engine.log file we might have a look at it. > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ------------------------------ > *From:* Alex K > *Sent:* Monday, January 29, 2018 5:40 PM > *To:* Mahdi Adnan > *Cc:* users > *Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM > > Hi, > > I have observed this logged at host when the issue occurs: > > VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer > > or > > VDSM host.domain command GetStatsVDS failed: Connection reset by peer > > At engine logs have not been able to correlate. > > Are you hosting Windows 2016 server and Windows 10 VMs? 
> The weird is that I have same setup on other clusters with no issues. > > Thanx, > Alex > > On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan > wrote: > > Hi, > > We have a cluster of 17 nodes, backed by GlusterFS storage, and using this > same script for backup. > we have no issues with it so far. > have you checked engine log file ? > > > -- > > Respectfully > *Mahdi A. Mahdi* > > ------------------------------ > *From:* users-bounces at ovirt.org on behalf of > Alex K > *Sent:* Wednesday, January 24, 2018 4:18 PM > *To:* users > *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM > > Hi all, > > I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on > top glusterfs. > On some VMs (especially one Windows server 2016 64bit with 500 GB of > disk). Guest agents are installed at VMs. i almost always observe that > during the backup of the VM the VM is rendered unresponsive (dashboard > shows a question mark at the VM status and VM does not respond to ping or > to anything). > > For scheduled backups I use: > > https://github.com/wefixit-AT/oVirtBackup > > The script does the following: > > 1. snapshot VM (this is done ok without any failure) > > 2. Clone snapshot (this steps renders the VM unresponsive) > > 3. Export Clone > > 4. Delete clone > > 5. Delete snapshot > > > Do you have any similar experience? Any suggestions to address this? > > I have never seen such issue with hosted Linux VMs. > > The cluster has enough storage to accommodate the clone. > > > Thanx, > > Alex > > > > > > _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rightkicktech at gmail.com Sat Feb 3 15:41:58 2018 From: rightkicktech at gmail.com (Alex K) Date: Sat, 3 Feb 2018 17:41:58 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Attaching vdm log from host that trigerred the error, where the Vm that was being cloned was running at that time. thanx, Alex On Sat, Feb 3, 2018 at 5:20 PM, Yaniv Kaul wrote: > > > On Feb 3, 2018 3:24 PM, "Alex K" wrote: > > Hi All, > > I have reproduced the backups failure. The VM that failed is named > Win-FileServer and is a Windows 2016 server 64bit with 300GB of disk. > During the cloning step the VM went unresponsive and I had to stop/start > it. > I am attaching the logs.I have another VM with same OS (named DC-Server > within the logs) but with smaller disk (60GB) which does not give any error > when it is cloned. > I see a line: > > EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call > Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM > v2.sitedomain command SnapshotVDS failed: Message timeout which can be > caused by communication issues > > > I suggest adding relevant vdsm.log as well. > Y. > > > I appreciate any advise why I am facing such issue with the backups. > > thanx, > Alex > > On Tue, Jan 30, 2018 at 12:49 AM, Alex K wrote: > >> Ok. I will reproduce and collect logs. >> >> Thanx, >> Alex >> >> On Jan 29, 2018 20:21, "Mahdi Adnan" wrote: >> >> I have Windows VMs, both client and server. >> if you provide the engine.log file we might have a look at it. >> >> >> -- >> >> Respectfully >> *Mahdi A. 
Mahdi* >> >> ------------------------------ >> *From:* Alex K >> *Sent:* Monday, January 29, 2018 5:40 PM >> *To:* Mahdi Adnan >> *Cc:* users >> *Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM >> >> Hi, >> >> I have observed this logged at host when the issue occurs: >> >> VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer >> >> or >> >> VDSM host.domain command GetStatsVDS failed: Connection reset by peer >> >> At engine logs have not been able to correlate. >> >> Are you hosting Windows 2016 server and Windows 10 VMs? >> The weird is that I have same setup on other clusters with no issues. >> >> Thanx, >> Alex >> >> On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan >> wrote: >> >> Hi, >> >> We have a cluster of 17 nodes, backed by GlusterFS storage, and using >> this same script for backup. >> we have no issues with it so far. >> have you checked engine log file ? >> >> >> -- >> >> Respectfully >> *Mahdi A. Mahdi* >> >> ------------------------------ >> *From:* users-bounces at ovirt.org on behalf of >> Alex K >> *Sent:* Wednesday, January 24, 2018 4:18 PM >> *To:* users >> *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM >> >> Hi all, >> >> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on >> top glusterfs. >> On some VMs (especially one Windows server 2016 64bit with 500 GB of >> disk). Guest agents are installed at VMs. i almost always observe that >> during the backup of the VM the VM is rendered unresponsive (dashboard >> shows a question mark at the VM status and VM does not respond to ping or >> to anything). >> >> For scheduled backups I use: >> >> https://github.com/wefixit-AT/oVirtBackup >> >> The script does the following: >> >> 1. snapshot VM (this is done ok without any failure) >> >> 2. Clone snapshot (this steps renders the VM unresponsive) >> >> 3. Export Clone >> >> 4. Delete clone >> >> 5. Delete snapshot >> >> >> Do you have any similar experience? 
Any suggestions to address this? >> >> I have never seen such issue with hosted Linux VMs. >> >> The cluster has enough storage to accommodate the clone. >> >> >> Thanx, >> >> Alex >> >> >> >> >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vdsm.log.6 Type: application/octet-stream Size: 112167 bytes Desc: not available URL: From christophe.trefois at uni.lu Sat Feb 3 17:58:57 2018 From: christophe.trefois at uni.lu (Christophe TREFOIS) Date: Sat, 3 Feb 2018 17:58:57 +0000 Subject: [ovirt-users] Windows 10 floppy not found during setup In-Reply-To: References: Message-ID: We had to do the swapping CD trick for Windows 10 yes. Regards, -- Dr Christophe Trefois, Dipl.-Ing. Technical Specialist / Post-Doc UNIVERSIT? DU LUXEMBOURG LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE Campus Belval | House of Biomedicine 6, avenue du Swing L-4367 Belvaux T: +352 46 66 44 6124 F: +352 46 66 44 6949 http://www.uni.lu/lcsb ---- This message is confidential and may contain privileged information. It is intended for the named recipient only. If you receive it in error please notify me and permanently delete the original message and any copies. ---- > On 3 Feb 2018, at 11:31, Vincent Royer wrote: > > I am trying to install Windows 10 on Ovirt 4.2 > > I pushed the virtio-win iso and vfd files to my ISO domain. I created the VM and loaded the CD rom with the windows install ISO and in the "run once" settings I chose the virtio-win selection as a floppy. However when entering windows setup, there is no drive to install to, and no floppy shows up in the usual place to install the drivers. > > the installation did succeed when selecting "IDE" as the target disk instead of the virtio or virtio-SCSI options. 
> > Is there any way to install the drivers after the fact and then change the disk mode from IDE to virtio? > > Is the method on this page https://www.ovirt.org/documentation/how-to/virtual-machines/create-a-windows-7-virtual-machine/ > still relevant? Looks like they suggest swapping the CD disk once the windows setup has begun. > > Thanks! > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3509 bytes Desc: not available URL: From vincent at epicenergy.ca Sat Feb 3 22:07:31 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sat, 3 Feb 2018 14:07:31 -0800 Subject: [ovirt-users] Windows 10 floppy not found during setup In-Reply-To: References: Message-ID: Thanks I tried it and it did in fact work. Just threw me off because all the instructions say to load the floppy. My windows install got right up to the point where it was loading the NICs, and then it all came crashing down. I expected the VMs and the engine to be able to share the same NIC, but I suppose it doesn't work that way. More reading... *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Sat, Feb 3, 2018 at 9:58 AM, Christophe TREFOIS < christophe.trefois at uni.lu> wrote: > We had to do the swapping CD trick for Windows 10 yes. > > Regards, > > -- > > Dr Christophe Trefois, Dipl.-Ing. > Technical Specialist / Post-Doc > > UNIVERSIT? 
DU LUXEMBOURG > > LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE > Campus Belval | House of Biomedicine > 6, avenue du Swing > L-4367 Belvaux > T: +352 46 66 44 6124 <+352%2046%2066%2044%206124> > F: +352 46 66 44 6949 <+352%2046%2066%2044%206949> > http://www.uni.lu/lcsb > > [image: Facebook] [image: Twitter] > [image: Google Plus] > [image: Linkedin] > [image: skype] > > > ---- > This message is confidential and may contain privileged information. > It is intended for the named recipient only. > If you receive it in error please notify me and permanently delete the > original message and any copies. > ---- > > > On 3 Feb 2018, at 11:31, Vincent Royer wrote: > > I am trying to install Windows 10 on Ovirt 4.2 > > I pushed the virtio-win iso and vfd files to my ISO domain. I created > the VM and loaded the CD rom with the windows install ISO and in the "run > once" settings I chose the virtio-win selection as a floppy. However when > entering windows setup, there is no drive to install to, and no floppy > shows up in the usual place to install the drivers. > > the installation did succeed when selecting "IDE" as the target disk > instead of the virtio or virtio-SCSI options. > > Is there any way to install the drivers after the fact and then change the > disk mode from IDE to virtio? > > Is the method on this page https://www.ovirt.org/ > documentation/how-to/virtual-machines/create-a-windows-7-virtual-machine/ > still relevant? Looks like they suggest swapping the CD disk once the > windows setup has begun. > > Thanks! > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
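On the "install on IDE first, add drivers, then switch" question raised earlier in this thread: once the virtio drivers are loaded inside the guest, the disk's bus can be changed through the engine's REST API with the VM powered off. A hedged sketch — the engine URL, credentials, and the VM/disk-attachment IDs are placeholders, and the curl call is only printed here rather than executed:

```shell
ENGINE='https://engine.example.com/ovirt-engine/api'   # placeholder engine URL
VM_ID='123'    # placeholder VM id
ATT_ID='456'   # placeholder disk-attachment id

# PUT body: switch the attachment's bus to virtio-SCSI.
payload='<disk_attachment><interface>virtio_scsi</interface></disk_attachment>'

# Dry run: print the request instead of sending it.
echo curl -k -u 'admin@internal:PASSWORD' -X PUT \
    -H 'Content-Type: application/xml' -d "$payload" \
    "$ENGINE/vms/$VM_ID/diskattachments/$ATT_ID"
```

The same change can be made in the Admin Portal by editing the disk; either way the guest must already have the virtio-SCSI driver installed or it will fail to boot.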
URL: From christophe.trefois at uni.lu Sat Feb 3 22:11:11 2018 From: christophe.trefois at uni.lu (Christophe TREFOIS) Date: Sat, 3 Feb 2018 22:11:11 +0000 Subject: [ovirt-users] Windows 10 floppy not found during setup In-Reply-To: References: Message-ID: VMs use a vNIC on ovirtmgmt logical network. There should be nothing to do with the engine itself. It was working fine for us after we are able to load the drivers properly. > On 3 Feb 2018, at 23:07, Vincent Royer wrote: > > Thanks I tried it and it did in fact work. Just threw me off because all the instructions say to load the floppy. > > My windows install got right up to the point where it was loading the NICs, and then it all came crashing down. I expected the VMs and the engine to be able to share the same NIC, but I suppose it doesn't work that way. More reading... > > Vincent Royer > 778-825-1057 > > > > SUSTAINABLE MOBILE ENERGY SOLUTIONS > > > > > On Sat, Feb 3, 2018 at 9:58 AM, Christophe TREFOIS > wrote: > We had to do the swapping CD trick for Windows 10 yes. > > Regards, > -- > > Dr Christophe Trefois, Dipl.-Ing. > Technical Specialist / Post-Doc > > UNIVERSIT? DU LUXEMBOURG > > LUXEMBOURG CENTRE FOR SYSTEMS BIOMEDICINE > Campus Belval | House of Biomedicine > 6, avenue du Swing > L-4367 Belvaux > T: +352 46 66 44 6124 > F: +352 46 66 44 6949 > http://www.uni.lu/lcsb > > > ---- > This message is confidential and may contain privileged information. > It is intended for the named recipient only. > If you receive it in error please notify me and permanently delete the original message and any copies. > ---- > > > >> On 3 Feb 2018, at 11:31, Vincent Royer > wrote: >> >> I am trying to install Windows 10 on Ovirt 4.2 >> >> I pushed the virtio-win iso and vfd files to my ISO domain. I created the VM and loaded the CD rom with the windows install ISO and in the "run once" settings I chose the virtio-win selection as a floppy. 
However when entering windows setup, there is no drive to install to, and no floppy shows up in the usual place to install the drivers. >> >> the installation did succeed when selecting "IDE" as the target disk instead of the virtio or virtio-SCSI options. >> >> Is there any way to install the drivers after the fact and then change the disk mode from IDE to virtio? >> >> Is the method on this page https://www.ovirt.org/documentation/how-to/virtual-machines/create-a-windows-7-virtual-machine/ >> still relevant? Looks like they suggest swapping the CD disk once the windows setup has begun. >> >> Thanks! >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3509 bytes Desc: not available URL: From ehaas at redhat.com Sun Feb 4 07:30:15 2018 From: ehaas at redhat.com (Edward Haas) Date: Sun, 4 Feb 2018 09:30:15 +0200 Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0 In-Reply-To: References: Message-ID: On Sat, Feb 3, 2018 at 9:06 AM, maoz zadok wrote: > Hello All, > I'm new to oVirt, I'm trying with no success to set up the networking on > an oVirt 4.2.0 node, and I think I'm missing something. > > background: > interfaces em1-4 is bonded to bond0 > VLAN configured on bond0.1 > and bridged to ovirtmgmt for the management interface. > > I'm not sure its updated to version 4.2.0 but I followed this post: > https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/ > It looks like an old howto, we will need to update or remove it. > > with this setting, the NetworkManager keep starting up on reboot, > and the interfaces are not managed by oVirt (and the nice traffic graphs > are not shown). 
> For the interfaces to be owned by oVirt, you will need to add the host to Engine. So I would just configure everything up to the VLAN (slaves, bond, VLAN) with NetworkManager prior to adding it to Engine. The bridge should be created when you add the host. (assuming the VLAN you mentioned is your management interface and its ip is the one used by Engine) > > > > > my question: > Is NetworkManager need to be disabled as in the above post? > No (for 4.1 and 4.2) Do I need to manage the networking using (nmtui) NetworkManager? > You better use cockpit or nmcli to configure the node before you add it to Engine. > > Thanks! > Maoz > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehaas at redhat.com Sun Feb 4 08:42:05 2018 From: ehaas at redhat.com (Edward Haas) Date: Sun, 4 Feb 2018 10:42:05 +0200 Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0 In-Reply-To: References: Message-ID: You may be encountering this problem: https://bugzilla.redhat.com/show_bug.cgi?id=1523661 If this is it, you have two options: - Upgrade VDSM to latest 4.2.1. - Define first the VLAN on one of the slaves, add the host to Engine and then modify the network attachment by creating a bond through Engine. Thanks, Edy. On Sun, Feb 4, 2018 at 9:30 AM, Edward Haas wrote: > > > On Sat, Feb 3, 2018 at 9:06 AM, maoz zadok wrote: > >> Hello All, >> I'm new to oVirt, I'm trying with no success to set up the networking on >> an oVirt 4.2.0 node, and I think I'm missing something. >> >> background: >> interfaces em1-4 is bonded to bond0 >> VLAN configured on bond0.1 >> and bridged to ovirtmgmt for the management interface. 
>> >> I'm not sure its updated to version 4.2.0 but I followed this post: >> https://www.ovirt.org/documentation/how-to/networking/ >> bonding-vlan-bridge/ >> > > It looks like an old howto, we will need to update or remove it. > > >> >> with this setting, the NetworkManager keep starting up on reboot, >> and the interfaces are not managed by oVirt (and the nice traffic graphs >> are not shown). >> > > For the interfaces to be owned by oVirt, you will need to add the host to > Engine. > So I would just configure everything up to the VLAN (slaves, bond, VLAN) > with NetworkManager prior to adding it to Engine. The bridge should be > created when you add the host. > (assuming the VLAN you mentioned is your management interface and its ip > is the one used by Engine) > > >> >> >> >> >> my question: >> Is NetworkManager need to be disabled as in the above post? >> > > No (for 4.1 and 4.2) > > Do I need to manage the networking using (nmtui) NetworkManager? >> > > You better use cockpit or nmcli to configure the node before you add it to > Engine. > > >> >> Thanks! >> Maoz >> >> >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nsoffer at redhat.com Sun Feb 4 13:53:07 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Sun, 04 Feb 2018 13:53:07 +0000 Subject: [ovirt-users] [IOPROCESS] New release for Fedora In-Reply-To: References: Message-ID: On Tue, Jan 30, 2018 at 7:11 PM Nir Soffer wrote: > Hi all, > > I released ioprocess 1.0.0 for Fedora 27 and 28. > > If you are using Fedora, please install the new version from the > updates-testing > and test it. > > Please share your feedback here: > https://bodhi.fedoraproject.org/updates/FEDORA-2018-fbe8141dd2 > This version was replaced with 1.0.2, fixing upgrade issue on RHEL/CentOS. 
Please share hour feedback here: https://bodhi.fedoraproject.org/updates/FEDORA-2018-5fc2a37e8a Nir -------------- next part -------------- An HTML attachment was scrubbed... URL: From kmisak at gmail.com Sun Feb 4 14:08:48 2018 From: kmisak at gmail.com (Misak Khachatryan) Date: Sun, 4 Feb 2018 18:08:48 +0400 Subject: [ovirt-users] VM paused due unknown storage error In-Reply-To: References: Message-ID: Bump. Best regards, Misak Khachatryan On Wed, Jan 31, 2018 at 2:28 PM, Misak Khachatryan wrote: > And sorry - yes, all hosts are active. > > Best regards, > Misak Khachatryan > > > On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose wrote: >> Could you provide the output of "gluster volume status" and the gluster >> mount logs to check further? >> Are all the host shown as active in the engine (that is, is the monitoring >> working?) >> >> On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan wrote: >>> >>> Hi, >>> >>> After upgrade to 4.2 i'm getting "VM paused due unknown storage >>> error". When i was upgrading i had some gluster problem with one of >>> the hosts, which i was fixed readding it to gluster peers. Now i see >>> something weir in bricks configuration, see attachment - one of the >>> bricks uses 0% of space. >>> >>> How I can diagnose this? Nothing wrong in logs as I can see. >>> >>> >>> >>> >>> Best regards, >>> Misak Khachatryan >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> From maozza at gmail.com Sun Feb 4 14:49:07 2018 From: maozza at gmail.com (maoz zadok) Date: Sun, 4 Feb 2018 16:49:07 +0200 Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0 In-Reply-To: References: Message-ID: I think that I find a quick workaround to resolve this issue, I remove all the logical interfaces from NM and from sysconfig/network-scripts and restart the server, the VDSM service generates the new file and configuration and everything works as cream. 1. 
delete all the ifcfg-* files from /etc/sysconfig/network-scripts (except for the ifcfg-lo ) 2. delete the interfaces from NetworkManager as follow: nmcli d delete ovirtmgmt nmcli d delete bond0 nmcli d delete bond0.1 ? On Sun, Feb 4, 2018 at 10:42 AM, Edward Haas wrote: > You may be encountering this problem: https://bugzilla.redhat.com/ > show_bug.cgi?id=1523661 > If this is it, you have two options: > - Upgrade VDSM to latest 4.2.1. > - Define first the VLAN on one of the slaves, add the host to Engine and > then modify the network attachment by creating a bond through Engine. > > Thanks, > Edy. > > On Sun, Feb 4, 2018 at 9:30 AM, Edward Haas wrote: > >> >> >> On Sat, Feb 3, 2018 at 9:06 AM, maoz zadok wrote: >> >>> Hello All, >>> I'm new to oVirt, I'm trying with no success to set up the networking on >>> an oVirt 4.2.0 node, and I think I'm missing something. >>> >>> background: >>> interfaces em1-4 is bonded to bond0 >>> VLAN configured on bond0.1 >>> and bridged to ovirtmgmt for the management interface. >>> >>> I'm not sure its updated to version 4.2.0 but I followed this post: >>> https://www.ovirt.org/documentation/how-to/networking/bondin >>> g-vlan-bridge/ >>> >> >> It looks like an old howto, we will need to update or remove it. >> >> >>> >>> with this setting, the NetworkManager keep starting up on reboot, >>> and the interfaces are not managed by oVirt (and the nice traffic graphs >>> are not shown). >>> >> >> For the interfaces to be owned by oVirt, you will need to add the host to >> Engine. >> So I would just configure everything up to the VLAN (slaves, bond, VLAN) >> with NetworkManager prior to adding it to Engine. The bridge should be >> created when you add the host. >> (assuming the VLAN you mentioned is your management interface and its ip >> is the one used by Engine) >> >> >>> >>> >>> >>> >>> my question: >>> Is NetworkManager need to be disabled as in the above post? 
>>> >> >> No (for 4.1 and 4.2) >> >> Do I need to manage the networking using (nmtui) NetworkManager? >>> >> >> You better use cockpit or nmcli to configure the node before you add it >> to Engine. >> >> >>> >>> Thanks! >>> Maoz >>> >>> >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vincent at epicenergy.ca Sun Feb 4 20:01:03 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Sun, 4 Feb 2018 12:01:03 -0800 Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0 In-Reply-To: References: Message-ID: I had these types of issues as well my first time around, and after a failed engine install I haven't been able to get things cleaned up, so I will have to start over. I created a bonded interface on the host before the engine setup. but once I created my first VM and assigned bond0 to it, the engine became inaccessible the moment the VM got an IP from the router. What is the preferred way to setup bonded interfaces? In Cockpit or nmcli before hosted engine setup? Or proceed with only one interface then add the other in engine? Is it possible, for example, to setup bonded interfaces with a static management IP on vlan 50 to access the engine, and let the other VMs grab DHCP IPs on vlan 10? On Feb 3, 2018 11:31 PM, "Edward Haas" wrote: On Sat, Feb 3, 2018 at 9:06 AM, maoz zadok wrote: > Hello All, > I'm new to oVirt, I'm trying with no success to set up the networking on > an oVirt 4.2.0 node, and I think I'm missing something. > > background: > interfaces em1-4 is bonded to bond0 > VLAN configured on bond0.1 > and bridged to ovirtmgmt for the management interface. 
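The layout quoted above (em1-4 bonded into bond0, VLAN bond0.1 carrying the management IP) can be pre-built with NetworkManager before the host is added to Engine, per Edward's advice. A dry-run sketch: `run` only prints each command (drop it to actually apply them), and the bond mode and the 192.0.2.10/24 address are placeholder choices, not values from the thread:

```shell
run() { printf '+ %s\n' "$*"; }   # dry run: print instead of execute

# Bond with four slaves (mode is a placeholder; pick one your switch supports).
run nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options mode=active-backup,miimon=100
for nic in em1 em2 em3 em4; do
    run nmcli con add type ethernet con-name "bond0-$nic" \
        ifname "$nic" master bond0
done

# Management VLAN on top of the bond, with a static address.
run nmcli con add type vlan con-name bond0.1 ifname bond0.1 \
    dev bond0 id 1 ipv4.method manual ipv4.addresses 192.0.2.10/24
```

The ovirtmgmt bridge itself is then created by Engine when the host is added, so it is deliberately absent from this sketch.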
> > I'm not sure its updated to version 4.2.0 but I followed this post: > https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/ > It looks like an old howto, we will need to update or remove it. > > with this setting, the NetworkManager keep starting up on reboot, > and the interfaces are not managed by oVirt (and the nice traffic graphs > are not shown). > For the interfaces to be owned by oVirt, you will need to add the host to Engine. So I would just configure everything up to the VLAN (slaves, bond, VLAN) with NetworkManager prior to adding it to Engine. The bridge should be created when you add the host. (assuming the VLAN you mentioned is your management interface and its ip is the one used by Engine) > > > > > my question: > Is NetworkManager need to be disabled as in the above post? > No (for 4.1 and 4.2) Do I need to manage the networking using (nmtui) NetworkManager? > You better use cockpit or nmcli to configure the node before you add it to Engine. > > Thanks! > Maoz > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alex at unix1337.com Mon Feb 5 04:28:00 2018 From: Alex at unix1337.com (Alex Bartonek) Date: Sun, 04 Feb 2018 23:28:00 -0500 Subject: [ovirt-users] Problem creating NFS storage domain Message-ID: Running oVirt 4.2. Creating a NFS storage domain, creates but fails with: Failed with error AcquireHostIdFailure Now, this is a new install of openmediavault. I can ssh from the oVirt server, access the NFS share (its mounted) and can create files etc. I'm sure its something small I'm overlooking. 
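The `open error -13` lines in the sanlock output that follows are errno 13, i.e. EACCES: sanlock (running as vdsm) cannot open the `dom_md/ids` file on the export. The usual cause is an NFS export not owned by uid/gid 36 (vdsm:kvm). A sketch of the checks — the export path is a placeholder, and the server-side chown/exportfs lines are shown as comments because they must run on the NFS server, not the host:

```shell
# Confirm what error -13 actually is:
python3 -c 'import errno, os; print(errno.errorcode[13], "=", os.strerror(13))'
# prints: EACCES = Permission denied

# On the NFS server, the exported directory should be owned 36:36, e.g.:
#   chown -R 36:36 /export/webserver_backups
# and the export should map anonymous access to that uid/gid, e.g.:
#   /export/webserver_backups *(rw,sync,anonuid=36,anongid=36)

# On the oVirt host, verify the vdsm user itself can write to the mount:
#   sudo -u vdsm touch /rhev/data-center/mnt/<server>:_<path>/probe
```

Being able to write over ssh as root proves little here, since root is typically squashed and vdsm/sanlock access the share as uid 36.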
Log output below: messages: Feb 4 22:19:09 santa sanlock[948]: 2018-02-04 22:19:09 45660 [18976]: open error -13 /rhev/data-center/mnt/openmediavault.themagicm.com:_webserver__backups/1a7d0d5c-fdfa-4010-aa6d-47cbcf473e3f/dom_md/ids Feb 4 22:19:09 santa sanlock[948]: 2018-02-04 22:19:09 45660 [18976]: s7 open_disk /rhev/data-center/mnt/openmediavault.themagicm.com:_webserver__backups/1a7d0d5c-fdfa-4010-aa6d-47cbcf473e3f/dom_md/ids error -13 Feb 4 22:19:10 santa sanlock[948]: 2018-02-04 22:19:10 45661 [955]: s7 add_lockspace fail result -19 vdsm.log: 2018-02-04 22:17:19,245-0600 ERROR (jsonrpc/2) [storage.TaskManager.Task] (Task='42eff5b1-5be9-4839-b613-f38c8c921d2f') Unexpected error (task:875) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run return fn(*args, **kargs) File "", line 2, in attachStorageDomain File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 48, in method ret = func(*args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/hsm.py", line 1197, in attachStorageDomain pool.attachSD(sdUUID) File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper return method(self, *args, **kwargs) File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 982, in attachSD dom.acquireHostId(self.id) File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 797, in acquireHostId self._manifest.acquireHostId(hostId, async) File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 453, in acquireHostId self._domainLock.acquireHostId(hostId, async) File "/usr/lib/python2.7/site-packages/vdsm/storage/clusterlock.py", line 315, in acquireHostId raise se.AcquireHostIdFailure(self._sdUUID, e) AcquireHostIdFailure: Cannot acquire host id: (u'45e2c87e-f006-42f5-a672-e2640e739c80', SanlockException(19, 'Sanlock lockspace add failure', 'No such device')) 2018-02-04 22:17:19,246-0600 INFO (jsonrpc/2) [storage.TaskManager.Task] 
(Task='42eff5b1-5be9-4839-b613-f38c8c921d2f') aborting: Task is aborted: "Cannot acquire host id: (u'45e2c87e-f006-42f5-a672-e2640e739c80', SanlockException(19, 'Sanlock lockspace add failure', 'No such device'))" - code 661 (task:1181) 2018-02-04 22:17:19,247-0600 ERROR (jsonrpc/2) [storage.Dispatcher] FINISH attachStorageDomain error=Cannot acquire host id: (u'45e2c87e-f006-42f5-a672-e2640e739c80', SanlockException(19, 'Sanlock lockspace add failure', 'No such device')) (dispatcher:82) 2018-02-04 22:17:19,247-0600 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call StorageDomain.attach failed (error 661) in 1.01 seconds (__init__:573) sanlock.log: 2018-02-04 22:14:02 45353 [955]: s5 lockspace 2eb84e24-151d-4277-a680-3a081b3dde67:1:/rhev/data-center/mnt/openmediavault.themagicm.com:_webserver__backups/2eb84e24-151d-4277-a680-3a081b3dde67/dom_md/ids:0 2018-02-04 22:14:02 45353 [18524]: open error -13 /rhev/data-center/mnt/openmediavault.themagicm.com:_webserver__backups/2eb84e24-151d-4277-a680-3a081b3dde67/dom_md/ids 2018-02-04 22:14:02 45353 [18524]: s5 open_disk /rhev/data-center/mnt/openmediavault.themagicm.com:_webserver__backups/2eb84e24-151d-4277-a680-3a081b3dde67/dom_md/ids error -13 2018-02-04 22:14:03 45354 [955]: s5 add_lockspace fail result -19 Sent with [ProtonMail](https://protonmail.com) Secure Email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Mon Feb 5 05:38:11 2018 From: sabose at redhat.com (Sahina Bose) Date: Mon, 5 Feb 2018 11:08:11 +0530 Subject: [ovirt-users] VM paused due unknown storage error In-Reply-To: References: Message-ID: Adding gluster-users. 
On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan wrote: > Hi, > > here is the output from virt3 - problematic host: > > [root at virt3 ~]# gluster volume status > Status of volume: data > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick virt1:/gluster/brick2/data 49152 0 Y > 3536 > Brick virt2:/gluster/brick2/data 49152 0 Y > 3557 > Brick virt3:/gluster/brick2/data 49152 0 Y > 3523 > Self-heal Daemon on localhost N/A N/A Y > 32056 > Self-heal Daemon on virt2 N/A N/A Y > 29977 > Self-heal Daemon on virt1 N/A N/A Y > 1788 > > Task Status of Volume data > ------------------------------------------------------------ > ------------------ > There are no active volume tasks > > Status of volume: engine > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick virt1:/gluster/brick1/engine 49153 0 Y > 3561 > Brick virt2:/gluster/brick1/engine 49153 0 Y > 3570 > Brick virt3:/gluster/brick1/engine 49153 0 Y > 3534 > Self-heal Daemon on localhost N/A N/A Y > 32056 > Self-heal Daemon on virt2 N/A N/A Y > 29977 > Self-heal Daemon on virt1 N/A N/A Y > 1788 > > Task Status of Volume engine > ------------------------------------------------------------ > ------------------ > There are no active volume tasks > > Status of volume: iso > Gluster process TCP Port RDMA Port Online > Pid > ------------------------------------------------------------ > ------------------ > Brick virt1:/gluster/brick4/iso 49154 0 Y > 3585 > Brick virt2:/gluster/brick4/iso 49154 0 Y > 3592 > Brick virt3:/gluster/brick4/iso 49154 0 Y > 3543 > Self-heal Daemon on localhost N/A N/A Y > 32056 > Self-heal Daemon on virt1 N/A N/A Y > 1788 > Self-heal Daemon on virt2 N/A N/A Y > 29977 > > Task Status of Volume iso > ------------------------------------------------------------ > ------------------ > There are no active volume tasks > > and 
one of the logs. > > Thanks in advance > > Best regards, > Misak Khachatryan > > > On Wed, Jan 31, 2018 at 9:17 AM, Sahina Bose wrote: > > Could you provide the output of "gluster volume status" and the gluster > > mount logs to check further? > > Are all the host shown as active in the engine (that is, is the > monitoring > > working?) > > > > On Wed, Jan 31, 2018 at 1:07 AM, Misak Khachatryan > wrote: > >> > >> Hi, > >> > >> After upgrade to 4.2 i'm getting "VM paused due unknown storage > >> error". When i was upgrading i had some gluster problem with one of > >> the hosts, which i was fixed readding it to gluster peers. Now i see > >> something weir in bricks configuration, see attachment - one of the > >> bricks uses 0% of space. > >> > >> How I can diagnose this? Nothing wrong in logs as I can see. > >> > >> > >> > >> > >> Best regards, > >> Misak Khachatryan > >> > >> _______________________________________________ > >> Users mailing list > >> Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
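For the brick that shows 0% used after the peer was re-added, two quick host-side checks complement the `gluster volume status` output above: per-volume heal info, and whether the brick filesystems are actually mounted on the odd host. Dry-run sketch — `run` only prints the commands, and the volume names and brick paths are taken from the status output in this thread:

```shell
run() { printf '+ %s\n' "$*"; }   # dry run: print instead of execute

for vol in engine data iso; do
    run gluster volume heal "$vol" info
done

# A brick reporting 0% used can mean the brick mount is missing and
# gluster is writing to the root filesystem underneath the mount point:
run df -h /gluster/brick1/engine /gluster/brick2/data /gluster/brick4/iso
```

If heal info lists unsynced entries that never drain, the gluster mount logs requested above should show why self-heal is failing.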
URL: From ddqlo at 126.com Mon Feb 5 07:06:32 2018 From: ddqlo at 126.com (=?GBK?B?tq3H4MH6?=) Date: Mon, 5 Feb 2018 15:06:32 +0800 (CST) Subject: [ovirt-users] active directory and sso In-Reply-To: References: <1c5cffbd.6cd0.161506d46dc.Coremail.ddqlo@126.com> <2cf9122f.38ed.161549f1515.Coremail.ddqlo@126.com> Message-ID: <72916e2d.5cce.16164c9a790.Coremail.ddqlo@126.com> Here are the engine logs: 2018-02-05 14:53:53,681+08 INFO [org.ovirt.engine.core.sso.utils.AuthenticationUtils] (default task-38) [] User test at test.org successfully logged in with scopes: ovirt-app-admin ovirt-app-api ovirt-app-portal ovirt-ext=auth:sequence-priority=~ ovirt-ext=revoke:revoke-all ovirt-ext=token-info:authz-search ovirt-ext=token-info:public-authz-search ovirt-ext=token-info:validate ovirt-ext=token:password-access 2018-02-05 14:53:53,765+08 INFO [org.ovirt.engine.core.bll.aaa.CreateUserSessionCommand] (default task-40) [6961a53b] Running command: CreateUserSessionCommand internal: false. 2018-02-05 14:53:53,775+08 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-40) [6961a53b] EVENT_ID: USER_VDC_LOGIN(30), Correlation ID: 6961a53b, Call Stack: null, Custom Event ID: -1, Message: User test at test.org@test.org logged in. 2018-02-05 14:53:55,305+08 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-60) [] Can't read file '/usr/share/ovirt-engine/files/spice/SpiceVersion_x64.txt' for request '/ovirt-engine/services/files/spice/SpiceVersion_x64.txt', will send a 404 error response. 2018-02-05 14:53:57,379+08 INFO [org.ovirt.engine.core.bll.VmLogonCommand] (default task-21) [4550dbd4-9c26-48fa-8ded-e50cd47a34e1] Running command: VmLogonCommand internal: false. 
Entities affected : ID: ae5846f6-4f25-4e7a-af2d-02e99599de47 Type: VMAction group CONNECT_TO_VM with role type USER 2018-02-05 14:53:57,400+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default task-21) [4550dbd4-9c26-48fa-8ded-e50cd47a34e1] START, VmLogonVDSCommand(HostName = host, VmLogonVDSCommandParameters:{runAsync='true', hostId='0049362d-39cc-498d-9c7e-f36c5fba20bf', vmId='ae5846f6-4f25-4e7a-af2d-02e99599de47', domain='test.org', password='***', userName='test at test.org@test.org'}), log id: 34439164 2018-02-05 14:53:58,404+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.VmLogonVDSCommand] (default task-21) [4550dbd4-9c26-48fa-8ded-e50cd47a34e1] FINISH, VmLogonVDSCommand, log id: 34439164 2018-02-05 14:53:58,467+08 INFO [org.ovirt.engine.core.bll.SetVmTicketCommand] (default task-23) [48fb921e] Running command: SetVmTicketCommand internal: false. Entities affected : ID: ae5846f6-4f25-4e7a-af2d-02e99599de47 Type: VMAction group CONNECT_TO_VM with role type USER 2018-02-05 14:53:58,469+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-23) [48fb921e] START, SetVmTicketVDSCommand(HostName = host, SetVmTicketVDSCommandParameters:{runAsync='true', hostId='0049362d-39cc-498d-9c7e-f36c5fba20bf', vmId='ae5846f6-4f25-4e7a-af2d-02e99599de47', protocol='SPICE', ticket='60qsiE96d7F5', validTime='120', userName='test at test.org', userId='737c7b8b-9503-489b-b32a-10bf8615bc1f', disconnectAction='LOCK_SCREEN'}), log id: 3076856 2018-02-05 14:53:59,108+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand] (default task-23) [48fb921e] FINISH, SetVmTicketVDSCommand, log id: 3076856 2018-02-05 14:53:59,116+08 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-23) [48fb921e] EVENT_ID: VM_SET_TICKET(164), Correlation ID: 48fb921e, Call Stack: null, Custom Event ID: -1, Message: User test at test.org@test.org initiated console session for VM win7 2018-02-05 
14:54:16,134+08 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler4) [] EVENT_ID: VM_CONSOLE_CONNECTED(167), Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: User test at test.org is connected to VM win7. At 2018-02-02 14:50:49, "Martin Perina" wrote: On Fri, Feb 2, 2018 at 4:46 AM, ??? wrote: Thanks for the reply. I have completely configured everything in option 1 as you described. But it seems that SSO still does not work. My domain forest is "test.org" and my user is "test". When I log in to the user portal, I get "test at test.org@test.org" in the top right corner. Should it be "test at test.org"? This is fine, for AD we are using UPN as username (in your case 'test at test.org') and we concatenate this with the authz extension name (in your case '@test.org'). Is it possible that the engine sends the wrong user name to the guest agent? Could you please share engine.log from after you try to log in to VM Portal and open a console to the VM, so we can investigate? Thanks Martin At 2018-02-01 15:35:57, "Martin Perina" wrote: On Thu, Feb 1, 2018 at 9:13 AM, ??? wrote: Hi, all I am trying to make SSO working with windows7 vm in an ovirt 4.1 environment. Ovirt-guest-agent has been installed in windows7 vm. I have an active directory server of windows2012 and I have configured the engine using "ovirt-engine-extension-aaa-ldap-setup" successfully. The windows7 vm has joined the domain,too. But when I login the userportal using a user created in the AD server, I still have to login the windows7 vm using the same user for the second time. It seems that SSO does not work. Anyone can help me? Thanks! We are not providing full SSO for VMs. At the moment you have 2 options: 1. If you want the user to be automatically logged in to a VM, then you need to set up SSO using the aaa-ldap extension for AD (please don't forget to answer Yes to the question about SSO for VMs in the setup tool).
And of course in the VM you need to have the guest agent installed and enabled. Once the user logs into VM Portal and clicks on a VM, they should be automatically logged into it. 2. If you set up Kerberos for engine SSO, then you don't need to enter a password to log in to VM Portal, but in that case we cannot pass a password into the VM and users are not automatically logged in. Martin _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: ??2.png Type: image/png Size: 2736 bytes Desc: not available URL: From maozza at gmail.com Mon Feb 5 08:09:58 2018 From: maozza at gmail.com (maoz zadok) Date: Mon, 5 Feb 2018 10:09:58 +0200 Subject: [ovirt-users] Host engine on virtual machine Message-ID: Hi All, What do you think about installing the host engine on a virtual machine hosted on the same cluster managing it? Does it make sense? I don't like the alternative of installing it on physical hardware; on the other hand, if the host hosting the engine falls, there will be no access to management. Is there a best practice for it? Please share with me/us your implementation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Mon Feb 5 09:27:27 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Mon, 5 Feb 2018 10:27:27 +0100 Subject: [ovirt-users] Host engine on virtual machine In-Reply-To: References: Message-ID: On Mon, Feb 5, 2018 at 9:09 AM, maoz zadok wrote: > Hi All, > What do you think about installing the host engine on a virtual machine > hosted on the same cluster managing it? > Does it make sense?
> I don't like the alternative of installing it on physical hardware; on the > other hand, if the host hosting the engine falls, there will be no access to > management. > Is there a best practice for it? Please share with me/us your > implementation. > > > Yes, it is supported and it is called Self Hosted Engine. See here: https://www.ovirt.org/documentation/self-hosted/Self-Hosted_Engine_Guide/ Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From recreationh at gmail.com Fri Feb 2 04:40:03 2018 From: recreationh at gmail.com (Terry hey) Date: Fri, 2 Feb 2018 12:40:03 +0800 Subject: [ovirt-users] Power management - oVirt 4.2 In-Reply-To: References: Message-ID: Dear Martin, Um... since I am going to use HPE ProLiant DL360 Gen10 servers to set up oVirt Node (hypervisor) hosts, and HP Gen10 uses iLO5 rather than iLO4, I would like to ask whether oVirt power management supports iLO5 or not. If not, do you have any idea how to set up power management with HP Gen10? Regards, Terry 2018-02-01 16:21 GMT+08:00 Martin Perina : > > > On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < > lorenzetto.luca at gmail.com> wrote: > >> Hi, >> >> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. Try >> using the standard ipmi. >> > > It's not just an alias, ilo3/ilo4 also have different defaults than > ipmilan. For example if you use ilo4, then by default the following is used: > > lanplus=1 > power_wait=4 > > So I recommend starting with ilo4 and adding any necessary custom options > into the Options field. If you need some custom > options, could you please share them with us? It would be very helpful for > us; if needed we could introduce ilo5 with > different defaults than ilo4 > > Thanks > > Martin > > >> Luca >> >> >> >> On 31 Jan 2018 11:14 PM, "Terry hey" wrote: >> >>> Dear all, >>> Does oVirt 4.2 power management support iLO5? I could not see an iLO5 >>> option in Power Management.
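Martin's note that ilo3/ilo4 are fence_ipmilan with different defaults can be exercised by hand from a host shell before configuring it in the engine. A hedged sketch (the iLO address and credentials below are placeholders, not values from this thread):

```shell
# ilo4 behaves like fence_ipmilan with lanplus=1 and power_wait=4, so an
# equivalent manual status check against the management board looks like:
fence_ipmilan --ip=10.0.0.42 --username=admin --password=secret \
              --lanplus --power-wait=4 --action=status
```

If this succeeds against an iLO5 board, selecting ilo4 in the engine and adding any extra flags in the Options field should work, as Martin suggests.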
>>> >>> Regards >>> Terry >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel.jordan at it-novum.com Fri Feb 2 11:13:47 2018 From: marcel.jordan at it-novum.com (Jordan, Marcel) Date: Fri, 2 Feb 2018 12:13:47 +0100 Subject: [ovirt-users] Documentation about vGPU in oVirt 4.2 Message-ID: <83cb8bad-24ed-b982-03e9-fe4c8a33ebd9@it-novum.com> Hi, I have some NVIDIA Tesla P100 and V100 GPUs in our oVirt 4.2 cluster and am searching for documentation on how to use the new vGPU feature. Is there any documentation out there on how to configure it correctly? -- Marcel Jordan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 898 bytes Desc: OpenPGP digital signature URL: From rightkicktech at gmail.com Mon Feb 5 12:19:41 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 5 Feb 2018 14:19:41 +0200 Subject: [ovirt-users] ovirt and gateway behavior Message-ID: Hi all, I have a 3-node oVirt 4.1 cluster, self-hosted on top of GlusterFS. The cluster is used to host several VMs. I have observed that when the gateway is lost (say the gateway device is down) the oVirt cluster goes down. This seems a bit extreme, especially when one does not care whether the hosted VMs have Internet connectivity or not. Can this behavior be disabled? Thanx, Alex -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pkotas at redhat.com Mon Feb 5 12:52:04 2018 From: pkotas at redhat.com (Petr Kotas) Date: Mon, 5 Feb 2018 13:52:04 +0100 Subject: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration. In-Reply-To: References: <22ca0fd1.aee0.1614c0a03df.Coremail.pym0914@163.com> <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> <6c18742a.bb33.1615142da93.Coremail.pym0914@163.com> Message-ID: Hi, I experimented with this and figured out the cause of the original issue. You are right that vm1 is not properly stopped. This is due to a known issue in the graceful shutdown introduced in oVirt 4.2. The VMs on a host being shut down are killed, but are not marked as stopped. This results in the behavior you have observed. Luckily, the patch is already done and present in the latest oVirt. However, be aware that gracefully shutting down the host will result in a graceful shutdown of its VMs. The engine will then not migrate them, since they were terminated gracefully. Hope this helps. Best, Petr On Fri, Feb 2, 2018 at 6:00 PM, Simone Tiraboschi wrote: > > > On Thu, Feb 1, 2018 at 1:06 PM, Pym wrote: > >> The environment on my side may be different from the link. My VM1 can be >> used normally after it is started on host2, but there is still information >> left on host1 that is not cleaned up. >> >> Only the interface and background can still get the information of vm1 on >> host1, but the vm2 has been successfully started on host2, with the HA >> function. >> >> I would like to ask a question, whether the UUID of the virtual machine >> is stored in the database or where it is maintained? Is it not successfully >> deleted after using the HA function? >> >>
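The mismatch Petr describes can be confirmed by comparing VDSM's bookkeeping with libvirt's on the source host. A hedged sketch, run on host1:

```shell
# VDSM's view -- on an affected host this may still list the VM that was
# killed during the host shutdown:
vdsm-client Host getVMList

# libvirt's view of actual domains -- the stale VM should be absent here:
virsh -r list --all
```

A VM that appears in the first listing but not the second is exactly the leftover state fixed by the patch Petr mentions.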
>> >> > I just encountered a similar behavior: > after a reboot of the host 'vdsm-client Host getVMFullList' is still > reporting an old VM that is not visible with 'virsh -r list --all'. > > I filed a bug to track it: > https://bugzilla.redhat.com/show_bug.cgi?id=1541479 > > > >> >> >> >> >> At 2018-02-01 16:12:16, "Simone Tiraboschi" wrote: >> >> >> >> On Thu, Feb 1, 2018 at 2:21 AM, Pym wrote: >> >>> >>> I checked the vm1, he is keep up state, and can be used, but on host1 >>> has after shutdown is a suspended vm1, this cannot be used, this is the >>> problem now. >>> >>> In host1, you can get the information of vm1 using the "vdsm-client Host >>> getVMList", but you can't get the vm1 information using the "virsh list". >>> >>> >> Maybe a side effect of https://bugzilla.redhat.com/show_bug.cgi?id=1505399 >> >> Arik? >> >> >> >>> >>> >>> >>> At 2018-02-01 07:16:37, "Simone Tiraboschi" wrote: >>> >>> >>> >>> On Wed, Jan 31, 2018 at 12:46 PM, Pym wrote: >>> >>>> Hi: >>>> >>>> The current environment is as follows: >>>> >>>> Ovirt-engine version 4.2.0 is the source code compilation and >>>> installation. Add two hosts, host1 and host2, respectively. At host1, a >>>> virtual machine is created on vm1, and a vm2 is created on host2 and HA is >>>> configured. >>>> >>>> Operation steps: >>>> >>>> Use the shutdown -r command on host1. Vm1 successfully migrated to >>>> host2. >>>> When host1 is restarted, the following situation occurs: >>>> >>>> The state of the vm2 will be shown in two images, switching from up and >>>> pause. >>>> >>>> When I perform the "vdsm-client Host getVMList" in host1, I will get >>>> the information of vm1. When I execute the "vdsm-client Host getVMList" in >>>> host2, I will get the information of vm1 and vm2. >>>> When I do "virsh list" in host1, there is no virtual machine >>>> information. When I execute "virsh list" at host2, I will get information >>>> of vm1 and vm2. >>>> >>>> How to solve this problem?
>>>> >>>> Is it the case that vm1 did not remove the information on host1 during >>>> the migration, or any other reason? >>>> >>> >>> Did you also check if your vms always remained up? >>> In 4.2 we have libvirt-guests service on the hosts which tries to >>> properly shutdown the running VMs on host shutdown. >>> >>> >>>> >>>> Thank you. >>>> >>>> >>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >>> >>> >>> >> >> >> >> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34515 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 35900 bytes Desc: not available URL: From gianluca.cecchi at gmail.com Mon Feb 5 13:38:47 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Mon, 5 Feb 2018 14:38:47 +0100 Subject: [ovirt-users] Documentation about vGPU in oVirt 4.2 In-Reply-To: <83cb8bad-24ed-b982-03e9-fe4c8a33ebd9@it-novum.com> References: <83cb8bad-24ed-b982-03e9-fe4c8a33ebd9@it-novum.com> Message-ID: On Fri, Feb 2, 2018 at 12:13 PM, Jordan, Marcel wrote: > Hi, > > i have some NVIDIA Tesla P100 and V100 gpu in our oVirt 4.2 cluster and > searching for a documentation how to use the new vGPU feature. Is there > any documentation out there how i configure it correctly? 
> > -- > Marcel Jordan > > > Possibly check what will become the official documentation for RHEV 4.2, even if it may not map one-to-one with oVirt. Admin guide here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-beta/html/administration_guide/sect-host_tasks#Preparing_GPU_Passthrough Planning and prerequisites guide here: https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-Beta/html/planning_and_prerequisites_guide/requirements#pci_device_requirements In the oVirt 4.2 release notes I see these bugzilla entries that can help too... https://bugzilla.redhat.com/show_bug.cgi?id=1481007 https://bugzilla.redhat.com/show_bug.cgi?id=1482033 HIH, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Mon Feb 5 13:46:28 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Mon, 05 Feb 2018 13:46:28 +0000 Subject: [ovirt-users] Failed upgrade from 4.1.9 to 4.2.x Message-ID: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> Hi, We're trying to upgrade from 4.1.9 to 4.2.x and we're bumping into an error we don't know how to solve. As per [1] we run the 'engine-setup' command and it fails with:

[ INFO ] Rolling back to the previous PostgreSQL instance (postgresql).
[ ERROR ] Failed to execute stage 'Misc configuration': Command '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to execute
[ INFO ] Yum Performing yum transaction rollback
[ INFO ] Stage: Clean up
Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
[ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180205133354-setup.conf'
[ INFO ] Stage: Pre-termination
[ INFO ] Stage: Termination
[ ERROR ] Execution of setup failed

In the /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log file I could see this:

* upgrading from 'postgresql.service' to 'rh-postgresql95-postgresql.service'
* Upgrading database.
ERROR: pg_upgrade tool failed
ERROR: Upgrade failed.
* See /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log for details.

And this file contains this information:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for invalid "line" user columns                    ok
Creating dump of global objects                             ok
Creating dump of database schemas
  django
  engine
  ovirt_engine_history
  postgres
  template1
                                                            ok
Checking for presence of required libraries                 fatal

Your installation references loadable libraries that are missing from the new installation. You can add these libraries to the new installation, or remove the functions using them from the old installation. A list of problem libraries is in the file: loadable_libraries.txt

Failure, exiting

I'm attaching full logs FWIW. Also, I'd like to mention that we created two custom triggers on the engine's 'users' table, but as I understand from the error this is not the issue (we upgraded several times within the same minor version and had no issues with that).

Could someone shed some light on this error and how to debug it?

Thanks.

[1]: https://www.ovirt.org/release/4.2.0/ -------------- next part -------------- A non-text attachment was scrubbed... Name: upgrade.tar.gz Type: application/x-gzip Size: 70404 bytes Desc: not available URL: From f.rothenstein at bodden-kliniken.de Mon Feb 5 13:49:52 2018 From: f.rothenstein at bodden-kliniken.de (Frank Rothenstein) Date: Mon, 05 Feb 2018 14:49:52 +0100 Subject: [ovirt-users] vdsmd fails after upgrade 4.1 -> 4.2 Message-ID: <1517838592.1716.15.camel@bodden-kliniken.de> Hi, I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start the host-processes.
systemctl start vdsmd fails with the following lines in journalctl:

Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running wait_for_network
Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running run_init_hooks
Feb 05 14:40:15 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: Running check_is_configured
Feb 05 14:40:15 glusternode1.bodden-kliniken.net sasldblistusers2[10440]: DIGEST-MD5 common mech free
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Error:
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: One of the modules is not configured to work with VDSM.
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: To configure the module use the following:
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module module-name]'.
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: If all modules are not configured try to use:
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force'
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: (The force flag will stop the module's service and start it
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: afterwards automatically to load the new configuration.)
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: abrt is already configured for vdsm
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: lvm is configured for vdsm
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Current revision of multipath.conf detected, preserving
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: Modules libvirt are not configured
Feb 05 14:40:16 glusternode1.bodden-kliniken.net vdsmd_init_common.sh[10414]: vdsm: stopped during execute check_is_configured task (task returned with error code 1).
Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: vdsmd.service: control process exited, code=exited status=1
Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Failed to start Virtual Desktop Server Manager.
-- Subject: Unit vdsmd.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit vdsmd.service has failed.
--
-- The result is failed.
Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Dependency failed for MOM instance configured for VDSM purposes.
-- Subject: Unit mom-vdsm.service has failed

The suggested "vdsm-tool configure --force" runs without errors, but the following restart of vdsmd shows the same error.

Any hints on that topic?

Frank

Frank Rothenstein
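For readers hitting the same message: the init output above names the module that failed the check, so it can be targeted directly rather than reconfiguring everything. A hedged sketch (standard vdsm-tool verbs; exact behavior may vary between versions):

```shell
# Show which modules VDSM considers configured:
vdsm-tool is-configured

# Configure the libvirt module named in the "Modules libvirt are not
# configured" line, then restart the daemon:
vdsm-tool configure --module libvirt --force
systemctl restart vdsmd
```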
Systemadministrator
Fon: +49 3821 700 125
Fax: +49 3821 700 190
Internet: www.bodden-kliniken.de
E-Mail: f.rothenstein at bodden-kliniken.de
_____________________________________________
BODDEN-KLINIKEN Ribnitz-Damgarten GmbH
Sandhufe 2
18311 Ribnitz-Damgarten
Telefon: 03821-700-0
Telefax: 03821-700-240
E-Mail: info at bodden-kliniken.de
Internet: http://www.bodden-kliniken.de
Sitz: Ribnitz-Damgarten, Amtsgericht: Stralsund, HRB 2919, Steuer-Nr.: 079/133/40188
Chair of the Supervisory Board: Carmen Schröter, Managing Director: Dr. Falko Milski, MBA

The content of this e-mail is intended exclusively for the named addressee. If you are not the intended addressee or their representative, please note that any form of publication, reproduction, or disclosure of the content of this e-mail is not permitted. Please inform the sender immediately and delete the e-mail.

BODDEN-KLINIKEN Ribnitz-Damgarten GmbH 2017
*** Virus-free thanks to Kerio Mail Server and SOPHOS Antivirus ***

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 547f8827.f9d4cce7.png Type: image/png Size: 18036 bytes Desc: not available URL: From stirabos at redhat.com Mon Feb 5 14:03:58 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 5 Feb 2018 15:03:58 +0100 Subject: [ovirt-users] Failed upgrade from 4.1.9 to 4.2.x In-Reply-To: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> References: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> Message-ID: On Mon, Feb 5, 2018 at 2:46 PM, wrote: > Hi, > > We're trying to upgrade from 4.1.9 to 4.2.x and we're bumping into an > error we don't know how to solve. As per [1] we run the 'engine-setup' > command and it fails with: > > [ INFO ] Rolling back to the previous PostgreSQL instance (postgresql).
> [ ERROR ] Failed to execute stage 'Misc configuration': Command
> '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to execute
> [ INFO ] Yum Performing yum transaction rollback
> [ INFO ] Stage: Clean up
> Log file is located at
> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
> [ INFO ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20180205133354-setup.conf'
> [ INFO ] Stage: Pre-termination
> [ INFO ] Stage: Termination
> [ ERROR ] Execution of setup failed
>
> As of the /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
> file I could see this:
>
> * upgrading from 'postgresql.service' to 'rh-postgresql95-postgresql.service'
> * Upgrading database.
> ERROR: pg_upgrade tool failed
> ERROR: Upgrade failed.
> * See /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log for details.
>
> And this file contains this information:
>
> Performing Consistency Checks
> -----------------------------
> Checking cluster versions                                   ok
> Checking database user is the install user                  ok
> Checking database connection settings                       ok
> Checking for prepared transactions                          ok
> Checking for reg* system OID user data types                ok
> Checking for contrib/isn with bigint-passing mismatch       ok
> Checking for invalid "line" user columns                    ok
> Creating dump of global objects                             ok
> Creating dump of database schemas
>   django
>   engine
>   ovirt_engine_history
>   postgres
>   template1
>                                                             ok
> Checking for presence of required libraries                 fatal
>
> Your installation references loadable libraries that are missing from the
> new installation. You can add these libraries to the new installation,
> or remove the functions using them from the old installation. A list of
> problem libraries is in the file:
> loadable_libraries.txt
>
> Failure, exiting
>
> I'm attaching full logs FWIW.
Also, I'd like to mention that we created
> two custom triggers on the engine's 'users' table, but as I understand from
> the error this is not the issue (We upgraded several times within the same
> minor and we had no issues with that).
>
> Could someone shed some light on this error and how to debug it?

Hi, could you please also attach loadable_libraries.txt?

> Thanks.
>
> [1]: https://www.ovirt.org/release/4.2.0/
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Mon Feb 5 14:08:36 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Mon, 05 Feb 2018 14:08:36 +0000 Subject: [ovirt-users] Failed upgrade from 4.1.9 to 4.2.x In-Reply-To: References: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> Message-ID: <7a437bf424f3224e68d6fd81d9818aec@devels.es> On 2018-02-05 14:03, Simone Tiraboschi wrote:
> On Mon, Feb 5, 2018 at 2:46 PM, wrote:
>
>> Hi,
>>
>> We're trying to upgrade from 4.1.9 to 4.2.x and we're bumping into
>> an error we don't know how to solve. As per [1] we run the
>> 'engine-setup' command and it fails with:
>>
>> [ INFO ] Rolling back to the previous PostgreSQL instance
>> (postgresql).
>> [ ERROR ] Failed to execute stage 'Misc configuration': Command
>> '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to
>> execute
>> [ INFO ] Yum Performing yum transaction rollback
>> [ INFO ] Stage: Clean up
>> Log file is located at
>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
>> [ INFO ] Generating answer file
>> '/var/lib/ovirt-engine/setup/answers/20180205133354-setup.conf'
>> [ INFO ] Stage: Pre-termination
>> [ INFO
] Stage: Termination
>> [ ERROR ] Execution of setup failed
>>
>> As of the
>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
>> file I could see this:
>>
>> * upgrading from 'postgresql.service' to 'rh-postgresql95-postgresql.service'
>> * Upgrading database.
>> ERROR: pg_upgrade tool failed
>> ERROR: Upgrade failed.
>> * See /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log for details.
>>
>> And this file contains this information:
>>
>> Performing Consistency Checks
>> -----------------------------
>> Checking cluster versions                                  ok
>> Checking database user is the install user                 ok
>> Checking database connection settings                      ok
>> Checking for prepared transactions                         ok
>> Checking for reg* system OID user data types               ok
>> Checking for contrib/isn with bigint-passing mismatch      ok
>> Checking for invalid "line" user columns                   ok
>> Creating dump of global objects                            ok
>> Creating dump of database schemas
>>   django
>>   engine
>>   ovirt_engine_history
>>   postgres
>>   template1
>>                                                            ok
>> Checking for presence of required libraries                fatal
>>
>> Your installation references loadable libraries that are missing from the
>> new installation. You can add these libraries to the new installation,
>> or remove the functions using them from the old installation. A list of
>> problem libraries is in the file:
>> loadable_libraries.txt
>>
>> Failure, exiting
>>
>> I'm attaching full logs FWIW.
Also, I'd like to mention that we
>> created two custom triggers on the engine's 'users' table, but as I
>> understand from the error this is not the issue (We upgraded several
>> times within the same minor and we had no issues with that).
>>
>> Could someone shed some light on this error and how to debug it?
>
> Hi,
> can you please attach also loadable_libraries.txt ?

Could not load library "$libdir/plpython2"
ERROR: could not access file "$libdir/plpython2": No such file or directory

Well, definitely it has to do with the triggers... The trigger uses plpython2u to replicate some entries in a different database. Is there a way I can get rid of this error other than disabling plpython2 before upgrading and re-enabling it after the upgrade?

Thanks.

>> Thanks.
>>
>> [1]: https://www.ovirt.org/release/4.2.0/
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

From mperina at redhat.com Mon Feb 5 14:48:40 2018 From: mperina at redhat.com (Martin Perina) Date: Mon, 5 Feb 2018 15:48:40 +0100 Subject: [ovirt-users] Failed upgrade from 4.1.9 to 4.2.x In-Reply-To: <7a437bf424f3224e68d6fd81d9818aec@devels.es> References: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> <7a437bf424f3224e68d6fd81d9818aec@devels.es> Message-ID: On Mon, Feb 5, 2018 at 3:08 PM, wrote:
> On 2018-02-05 14:03, Simone Tiraboschi wrote:
>> On Mon, Feb 5, 2018 at 2:46 PM, wrote:
>>
>>> Hi,
>>>
>>> We're trying to upgrade from 4.1.9 to 4.2.x and we're bumping into
>>> an error we don't know how to solve. As per [1] we run the
>>> 'engine-setup' command and it fails with:
>>>
>>> [ INFO ] Rolling back to the previous PostgreSQL instance
>>> (postgresql).
>>> [ ERROR ] Failed to execute stage 'Misc configuration': Command
>>> '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to execute
>>> [ INFO ] Yum Performing yum transaction rollback
>>> [ INFO ] Stage: Clean up
>>> Log file is located at
>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
>>> [ INFO ] Generating answer file
>>> '/var/lib/ovirt-engine/setup/answers/20180205133354-setup.conf'
>>> [ INFO ] Stage: Pre-termination
>>> [ INFO ] Stage: Termination
>>> [ ERROR ] Execution of setup failed
>>>
>>> As of the
>>> /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log
>>> file I could see this:
>>>
>>> * upgrading from 'postgresql.service' to 'rh-postgresql95-postgresql.service'
>>> * Upgrading database.
>>> ERROR: pg_upgrade tool failed
>>> ERROR: Upgrade failed.
>>> * See /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log for details.
>>>
>>> And this file contains this information:
>>>
>>> Performing Consistency Checks
>>> -----------------------------
>>> Checking cluster versions                                  ok
>>> Checking database user is the install user                 ok
>>> Checking database connection settings                      ok
>>> Checking for prepared transactions                         ok
>>> Checking for reg* system OID user data types               ok
>>> Checking for contrib/isn with bigint-passing mismatch      ok
>>> Checking for invalid "line" user columns                   ok
>>> Creating dump of global objects                            ok
>>> Creating dump of database schemas
>>>   django
>>>   engine
>>>   ovirt_engine_history
>>>   postgres
>>>   template1
>>>                                                            ok
>>> Checking for presence of required libraries                fatal
>>>
>>> Your installation references loadable libraries that are missing from the
>>> new installation. You can add these libraries to the new installation,
>>> or remove the functions using them from the old installation.
>>> A list of >>> problem libraries is in the file: >>> loadable_libraries.txt >>> >>> Failure, exiting >>> >>> I'm attaching full logs FWIW. Also, I'd like to mention that we >>> created two custom triggers on the engine's 'users' table, but as I >>> understand from the error this is not the issue (we upgraded several >>> times within the same minor and we had no issues with that). >>> >>> Could someone shed some light on this error and how to debug it? >>> >> >> Hi, >> can you please also attach loadable_libraries.txt? >> >> > > Could not load library "$libdir/plpython2" > ERROR: could not access file "$libdir/plpython2": No such file or > directory > Hmm, you probably need to install the rh-postgresql95-postgresql-plpython package. This is not installed by default with oVirt, as we don't use it. > > Well, definitely it has to do with the triggers... The trigger uses > plpython2u to replicate some entries in a different database. Is there a > way I can get rid of this error other than disabling plpython2 before > upgrading and re-enabling it after the upgrade? > > Thanks. > > >> Thanks. >>> >>> [1]: https://www.ovirt.org/release/4.2.0/ [3] >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users [4] >>> >> >> >> >> Links: >> ------ >> [3] https://www.ovirt.org/release/4.2.0/ >> [4] http://lists.ovirt.org/mailman/listinfo/users > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed...
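As a reference for this thread: the libraries that blocked pg_upgrade can be read straight out of loadable_libraries.txt. A minimal sketch, using the file content quoted above as sample input — the yum package name at the end follows Martin's suggestion and is an assumption, not something pg_upgrade itself reports:

```shell
#!/bin/sh
# Recreate loadable_libraries.txt with the content quoted in this thread.
cat > loadable_libraries.txt <<'EOF'
Could not load library "$libdir/plpython2"
ERROR:  could not access file "$libdir/plpython2": No such file or directory
EOF

# List each missing library once, with the $libdir/ prefix stripped.
missing=$(sed -n 's/.*Could not load library "$libdir\/\([^"]*\)".*/\1/p' \
            loadable_libraries.txt | sort -u)
echo "missing: $missing"

# Suggested fix per library (SCL package name taken from Martin's reply):
for lib in $missing; do
  case "$lib" in
    plpython2*) echo "yum install -y rh-postgresql95-postgresql-plpython" ;;
  esac
done
```

After installing the package, re-running engine-setup should let the library check pass; the alternative the reporter mentions (dropping the triggers/language before the upgrade) avoids the dependency entirely.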
URL: From lorenzetto.luca at gmail.com Mon Feb 5 14:48:57 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 5 Feb 2018 15:48:57 +0100 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated Message-ID: Hello, I'm starting the implementation of our disaster recovery site with RHV 4.1.latest for our production environment. Our production setup is very simple, with a self-hosted engine on dc KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD dcs. Our whole setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA and EMC VNX8000. Both storage arrays support replication via their own replication protocols (SRDF, MirrorView), so we'd like to delegate to them the replication of data to the remote site, which is located in another remote datacenter. In the KVMPD DC we have some storage domains that contain non-critical VMs, which we don't want to replicate to the remote site (in case of failure they have a low priority and will be restored from a backup). In our setup we won't replicate them, so they will not be available for attachment on the remote site. Can this be an issue? Do we need to replicate everything? What about the master domain? Do I need the master storage domain to stay on a replicated volume, or can it be any of the available ones? I've seen that since 4.1 there's an API for updating OVF_STORE disks. Do we need to invoke it with a frequency that is compatible with the replication frequency on the storage side? We have set the RPO to 1hr for the moment (even if the planned RPO requires 2hrs). Does OVF_STORE get updated with the required frequency? I've seen a recent presentation by Maor Lipchuk that is showing the automagic ansible role for disaster recovery: -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone, if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the world's largest library. The problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (b. 1945) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From pkotas at redhat.com Mon Feb 5 14:49:07 2018 From: pkotas at redhat.com (Petr Kotas) Date: Mon, 5 Feb 2018 15:49:07 +0100 Subject: [ovirt-users] vdsmd fails after upgrade 4.1 -> 4.2 In-Reply-To: <1517838592.1716.15.camel@bodden-kliniken.de> References: <1517838592.1716.15.camel@bodden-kliniken.de> Message-ID: Hi Frank, can you please send your vdsm logs? The 4.2 release changed the deployment from the engine slightly; Ansible is now also called. Although I am not sure if this is your case. I would go for entirely removing vdsm and installing it from scratch, if that is possible for you. This could solve your issue. Looking forward to hearing from you. Petr On Mon, Feb 5, 2018 at 2:49 PM, Frank Rothenstein < f.rothenstein at bodden-kliniken.de> wrote: > Hi, > > I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start the > host-processes. > systemctl start vdsmd fails with the following lines in journalctl: > > >
> Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: To configure the module use the following: > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module module- > name]'. > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: If all modules are not configured try to > use: > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force' > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: (The force flag will stop the module's > service and start it > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: afterwards automatically to load the new > configuration.) > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: abrt is already configured for vdsm > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: lvm is configured for vdsm > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: Current revision of multipath.conf > detected, preserving > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: Modules libvirt are not configured > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > vdsmd_init_common.sh[10414]: vdsm: stopped during execute > check_is_configured task (task returned with error code 1). > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: > vdsmd.service: control process exited, code=exited status=1 > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Failed to > start Virtual Desktop Server Manager. > -- Subject: Unit vdsmd.service has failed > -- Defined-By: systemd > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel > -- > -- Unit vdsmd.service has failed. 
> -- > -- The result is failed. > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: Dependency > failed for MOM instance configured for VDSM purposes. > -- Subject: Unit mom-vdsm.service has failed > > > > The suggested "vdsm-tool configure --force" runs w/o errors, the > following restart of vdsmd shows the same error. > > Any hints on that topic? > > Frank > > > > Frank Rothenstein > > Systemadministrator > Fon: +49 3821 700 125 <+49%203821%20700125> > Fax: +49 3821 700 190 <+49%203821%20700190> > Internet: www.bodden-kliniken.de > E-Mail: f.rothenstein at bodden-kliniken.de > > > _____________________________________________ > BODDEN-KLINIKEN Ribnitz-Damgarten GmbH > Sandhufe 2 > 18311 Ribnitz-Damgarten > > Telefon: 03821-700-0 > Telefax: 03821-700-240 > > E-Mail: info at bodden-kliniken.de > Internet: http://www.bodden-kliniken.de > > Registered office: Ribnitz-Damgarten, Register court: Stralsund, HRB 2919, Tax no.: > 079/133/40188 > Chair of the supervisory board: Carmen Schröter, Managing director: Dr. Falko > Milski, MBA > > The content of this e-mail is intended exclusively for the designated > addressee. If you are not the intended addressee of this e-mail or > their representative, please note that any > form of publication, reproduction or disclosure of the content > of this e-mail is not permitted. > Please inform the sender immediately and delete the > e-mail. > > © BODDEN-KLINIKEN Ribnitz-Damgarten GmbH 2017 > *** Virus-free thanks to Kerio Mail Server and SOPHOS Antivirus *** > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
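For anyone hitting the same wall: which module is blocking vdsmd can be read straight out of the journal output above. A minimal sketch, using sample lines copied from Frank's mail — the vdsm-tool invocation at the end is only echoed here, on the assumption (matching the error text) that per-module configuration is done with `vdsm-tool configure --module`:

```shell
#!/bin/sh
# Sample journal lines from the failing 'systemctl start vdsmd' above.
cat > journal.txt <<'EOF'
vdsmd_init_common.sh[10414]: abrt is already configured for vdsm
vdsmd_init_common.sh[10414]: lvm is configured for vdsm
vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm yet
vdsmd_init_common.sh[10414]: Modules libvirt are not configured
EOF

# Pull out every module reported as not yet configured.
unconfigured=$(sed -n 's/.*: \([a-z]*\) is not configured for vdsm yet.*/\1/p' journal.txt)
echo "unconfigured: $unconfigured"

# On the host itself one would then run (echoed here, not executed):
for m in $unconfigured; do
  echo "vdsm-tool configure --module $m --force"
done
```

If the module still reports unconfigured after a forced configure (as in Frank's case), the per-module output above at least narrows the problem to libvirt's configuration rather than vdsm as a whole.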
Name: 547f8827.f9d4cce7.png Type: image/png Size: 18036 bytes Desc: not available URL: From lorenzetto.luca at gmail.com Mon Feb 5 14:55:23 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Mon, 5 Feb 2018 15:55:23 +0100 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: Hello, I'm starting the implementation of our disaster recovery site with RHV 4.1.latest for our production environment. Our production setup is very simple, with a self-hosted engine on dc KVMPDCA, and virtual machines in both the KVMPDCA and KVMPD dcs. Our whole setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA and EMC VNX8000. Both storage arrays support replication via their own replication protocols (SRDF, MirrorView), so we'd like to delegate to them the replication of data to the remote site, which is located in another remote datacenter. In the KVMPD DC we have some storage domains that contain non-critical VMs, which we don't want to replicate to the remote site (in case of failure they have a low priority and will be restored from a backup). In our setup we won't replicate them, so they will not be available for attachment on the remote site. Can this be an issue? Do we need to replicate everything? What about the master domain? Do I need the master storage domain to stay on a replicated volume, or can it be any of the available ones? I've seen that since 4.1 there's an API for updating OVF_STORE disks. Do we need to invoke it with a frequency that is compatible with the replication frequency on the storage side? We have set the RPO to 1hr for the moment (even if the planned RPO requires 2hrs). Does OVF_STORE get updated with the required frequency?
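On the OVF_STORE frequency question: the engine refreshes the OVF_STORE disks on a timer controlled by the OvfUpdateIntervalInMinutes config key (default 60, readable and settable with engine-config). A back-of-the-envelope check of that interval against the storage-side RPO, using the numbers from this mail — the key name is real, but the additive staleness model is a simplifying assumption:

```shell
#!/bin/sh
# ovf_update_min: engine-config -g OvfUpdateIntervalInMinutes (default 60)
# rpo_min: the storage replication RPO currently set in this thread (1 hr)
ovf_update_min=60
rpo_min=60

# Worst case, the DR copy of the OVF_STORE lags behind reality by roughly
# the OVF refresh interval plus the replication RPO.
worst_lag=$((ovf_update_min + rpo_min))
echo "worst-case OVF_STORE staleness on the DR site: ${worst_lag} min"

if [ "$ovf_update_min" -le "$rpo_min" ]; then
  echo "OVF refresh interval fits within the RPO"
else
  echo "consider: engine-config -s OvfUpdateIntervalInMinutes=$rpo_min"
fi
```

So with the defaults and a 1-hour RPO, VMs created or reconfigured in the last hour or two may be missing or stale in the replicated OVF_STORE; whether that is acceptable depends on how often the VM inventory changes.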
I've seen a recent presentation by Maor Lipchuk that is showing the "automagic" ansible role for disaster recovery: https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible It's also related to some youtube presentations demonstrating a real DR plan execution. But from what I've seen, Maor is explicitly talking about the 4.2 release. Does that role work only with >4.2 releases, or can it also be used on earlier (4.1) versions? I've tested a manual flow of replication + recovery through Import SD followed by Import VM, and it worked like a charm. Using a prebuilt ansible role would reduce my effort of creating new automation for this. Does anyone have experience like mine? Thank you for the help you may provide; I'd like to contribute back to you with all my findings and with a usable tool (also integrated with storage arrays if possible). Luca (Sorry for the duplicate email, ctrl-enter happened before mail completion) -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone, if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the world's largest library. The problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (b. 1945) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From f.thommen at dkfz-heidelberg.de Mon Feb 5 15:10:15 2018 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Mon, 5 Feb 2018 16:10:15 +0100 Subject: [ovirt-users] oVirt Upgrade 4.1 -> 4.2 fails with YUM dependency problems (CentOS) In-Reply-To: References: Message-ID: <8f89e91f-8e9a-46f0-f0e5-1f7fb6997ae0@dkfz-heidelberg.de> Following the minor release upgrade instructions on https://www.ovirt.org/documentation/upgrade-guide/chap-Updates_between_Minor_Releases/ solved this issue. Now we are bumping into another issue, for which I'll probably open another thread.
frank On 02/02/2018 05:33 PM, Chas Hockenbarger wrote: > I haven't tried this yet, but looking at the detailed error, the > implication is that your current install is less than 4.1.7, which is > where the conflict is. Have you tried updating to > 4.1.7 before upgrading? > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From ccox at endlessnow.com Mon Feb 5 16:02:39 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Mon, 5 Feb 2018 10:02:39 -0600 Subject: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a bad way and all VMs for one node marked Unknown and Not Reponding while up In-Reply-To: References: <0e21836d-c660-227b-c7a8-d57a09d40d2d@endlessnow.com> <18dab82b-40ec-e034-74fb-2c2db7372f7c@endlessnow.com> <43c20a8a-6e77-19b6-283e-6da9f3929fd5@endlessnow.com> Message-ID: Forgive the top post. I guess what I need to know now is whether there is a recovery path that doesn't lead to total loss of the VMs that are currently in the "Unknown" "Not responding" state. We are planning a total oVirt shutdown. I just would like to know if we've effectively lost those VMs or not. Again, the VMs are currently "up". And we use a file backup process, so in theory they can be restored, just somewhat painfully, from scratch. But if somebody knows: if we shut down all the bad VMs and the blade, is there some way oVirt can know the VMs are "ok" to start up? Will changing their state directly to "down" in the db stick if the blade is down? That is, will we get to a known state where the VMs can actually be started and brought back into a known state? Right now, we're feeling there's a good chance we will not be able to recover these VMs, even though they are "up" right now. I really need some way to force oVirt into an integral state, even if it means we take the whole thing down. Possible?
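On the "will changing their state directly to 'down' in the db stick" question: the change reverts because the engine's monitoring rewrites vm_dynamic.status while ovirt-engine is running, so the engine has to be stopped first. A dry-run sketch that only prints the commands — the table and column names are assumed from the oVirt engine schema, the VM names are placeholders, and the status codes (0 = Down, 8 = Unknown) come from this thread; verify against your own database before running anything like this:

```shell
#!/bin/sh
# Placeholder names for the stuck VMs; substitute your own.
VM_NAMES="vtop3 d0lpvd070"

# Build a quoted SQL IN-list: 'vtop3','d0lpvd070'
quoted="'$(echo $VM_NAMES | sed "s/ /','/g")'"
sql="UPDATE vm_dynamic SET status = 0 WHERE vm_guid IN (SELECT vm_guid FROM vm_static WHERE vm_name IN ($quoted));"

# Dry run: print the procedure instead of executing it.
echo "systemctl stop ovirt-engine   # stop monitoring so the change sticks"
echo "su - postgres -c \"psql engine -c \\\"$sql\\\"\""
echo "systemctl start ovirt-engine"
```

As Douglas notes elsewhere in the thread, editing the db by hand is not recommended; this sketch only illustrates why the manual change "goes immediately back to 8" when the engine is left running.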
On 01/25/2018 06:57 PM, Christopher Cox wrote: > > > On 01/25/2018 04:57 PM, Douglas Landgraf wrote: >> On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox >> wrote: >>> On 01/25/2018 02:25 PM, Douglas Landgraf wrote: >>>> >>>> On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox >>>> wrote: >>>>> >>>>> Would restarting vdsm on the node in question help fix this? >>>>> Again, all >>>>> the >>>>> VMs are up on the node. Prior attempts to fix this problem have >>>>> left the >>>>> node in a state where I can issue the "has been rebooted" command >>>>> to it, >>>>> it's confused. >>>>> >>>>> So... node is up. All VMs are up. Can't issue "has been rebooted" to >>>>> the >>>>> node, all VMs show Unknown and not responding but they are up. >>>>> >>>>> Changing the status in the ovirt db to 0 works for a second and then it >>>>> goes >>>>> immediately back to 8 (which is why I'm wondering if I should restart >>>>> vdsm >>>>> on the node). >>>> >>>> >>>> It's not recommended to change the db manually. >>>> >>>>> >>>>> Oddly enough, we're running all of this in production. So, >>>>> watching it >>>>> all >>>>> go down isn't the best option for us. >>>>> >>>>> Any advice is welcome. >>>> >>>> >>>> >>>> We would need to see the node/engine logs, have you found any error in >>>> the vdsm.log >>>> (from nodes) or engine.log? Could you please share the error? >>> >>> >>> >>> In short, the error is our ovirt manager lost network (our problem) and >>> crashed hard (hardware issue on the server). On bring-up, we had some >>> network changes (that caused the lost network problem) so our LACP >>> bond was >>> down for a bit while we were trying to bring it up (noting the ovirt >>> manager >>> is up while we're reestablishing the network on the switch side). >>> >>> In other words, that's the "error" so to speak that got us to where we >>> are. >>> >>> Full DEBUG enabled on the logs... The error messages seem obvious to >>> me..
starts like this (noting the ISO DOMAIN was coming off an NFS mount >>> off the >>> ovirt management server... yes... we know... we do have plans to move >>> that). >>> >>> So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz): >>> >>> (hopefully no surprise here) >>> >>> Thread-2426633::WARNING::2018-01-23 >>> 13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) >>> Could not >>> collect metadata file for domain path >>> /rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844 >>> >>> Traceback (most recent call last): >>> File "/usr/share/vdsm/storage/fileSD.py", line 735, in >>> collectMetaFiles >>> sd.DOMAIN_META_DATA)) >>> File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob >>> return self._iop.glob(pattern) >>> File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", >>> line 536, >>> in glob >>> return self._sendCommand("glob", {"pattern": pattern}, >>> self.timeout) >>> File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", >>> line 421, >>> in _sendCommand >>> raise Timeout(os.strerror(errno.ETIMEDOUT)) >>> Timeout: Connection timed out >>> Thread-27::ERROR::2018-01-23 >>> 13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) domain >>> e5ecae2f-5a06-4743-9a43-e74d83992c35 not found >>> Traceback (most recent call last): >>> File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain >>> dom = findMethod(sdUUID) >>> File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain >>> return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID)) >>> File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath >>>
raise se.StorageDomainDoesNotExist(sdUUID) >>> StorageDomainDoesNotExist: Storage domain does not exist: >>> (u'e5ecae2f-5a06-4743-9a43-e74d83992c35',) >>> Thread-27::ERROR::2018-01-23 >>> 13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error >>> monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35 >>> Traceback (most recent call last): >>> ?? File "/usr/share/vdsm/storage/monitor.py", line 272, in >>> _monitorDomain >>> ???? self._performDomainSelftest() >>> ?? File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 769, in >>> wrapper >>> ???? value = meth(self, *a, **kw) >>> ?? File "/usr/share/vdsm/storage/monitor.py", line 339, in >>> _performDomainSelftest >>> ???? self.domain.selftest() >>> ?? File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__ >>> ???? return getattr(self.getRealDomain(), attrName) >>> ?? File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain >>> ???? return self._cache._realProduce(self._sdUUID) >>> ?? File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce >>> ???? domain = self._findDomain(sdUUID) >>> ?? File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain >>> ???? dom = findMethod(sdUUID) >>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain >>> ???? return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID)) >>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath >>> ???? raise se.StorageDomainDoesNotExist(sdUUID) >>> StorageDomainDoesNotExist: Storage domain does not exist: >>> (u'e5ecae2f-5a06-4743-9a43-e74d83992c35',) >>> >>> >>> Again, all the hypervisor nodes will complain about having the NFS >>> area for >>> ISO DOMAIN now gone.? Remember the ovirt manager node held this and >>> it has >>> now network has gone out and the node crashed (note: the ovirt node (the >>> actual server box) shouldn't crash due to the network outage, but it >>> did. >> >> >> I have added VDSM people in this thread to review it. 
I am assuming >> the network changes (during the crash) still make the storage domain >> available for the nodes. > > Ideally, nothing was lost node wise (neither LAN nor iSCSI), just the > ovirt manager lost its network connection.? So the only thing, as I > mentioned, storage wise that was lost was the ISO DOMAIN which was NFS'd > off the ovirt manager. > >> >>> >>> So here is the engine collapse as it lost network connectivity >>> (before the >>> server actually crashed hard). >>> >>> 2018-01-23 13:45:33,666 ERROR >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-87) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VDSM d0lppn067 command failed: >>> Heartbeat >>> exeeded >>> 2018-01-23 13:45:33,666 ERROR >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-10) [21574461] Correlation ID: null, Call >>> Stack: null, Custom Event ID: -1, Message: VDSM d0lppn072 command >>> failed: >>> Heartbeat exeeded >>> 2018-01-23 13:45:33,666 ERROR >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Correlation ID: null, Call >>> Stack: null, Custom Event ID: -1, Message: VDSM d0lppn066 command >>> failed: >>> Heartbeat exeeded >>> 2018-01-23 13:45:33,667 ERROR >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>> (DefaultQuartzScheduler_Worker-87) [] Command >>> 'GetStatsVDSCommand(HostName = >>> d0lppn067, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>> hostId='f99c68c8-b0e8-437b-8cd9-ebaddaaede96', >>> vds='Host[d0lppn067,f99c68c8-b0e8-437b-8cd9-ebaddaaede96]'})' execution >>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,667 ERROR >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>> (DefaultQuartzScheduler_Worker-10) [21574461] Command >>> 'GetStatsVDSCommand(HostName = 
d0lppn072, >>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>> hostId='fdc00296-973d-4268-bd79-6dac535974e0', >>> vds='Host[d0lppn072,fdc00296-973d-4268-bd79-6dac535974e0]'})' execution >>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,667 ERROR >>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Command >>> 'GetStatsVDSCommand(HostName = d0lppn066, >>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>> hostId='14abf559-4b62-4ebd-a345-77fa9e1fa3ae', >>> vds='Host[d0lppn066,14abf559-4b62-4ebd-a345-77fa9e1fa3ae]'})' execution >>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,669 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-87) []? Failed getting vds stats, >>> vds='d0lppn067'(f99c68c8-b0e8-437b-8cd9-ebaddaaede96): >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,669 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-10) [21574461]? Failed getting vds stats, >>> vds='d0lppn072'(fdc00296-973d-4268-bd79-6dac535974e0): >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,669 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d]? 
Failed getting vds stats, >>> vds='d0lppn066'(14abf559-4b62-4ebd-a345-77fa9e1fa3ae): >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-10) [21574461] Failure to refresh Vds >>> runtime >>> info: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Failure to refresh Vds >>> runtime >>> info: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-87) [] Failure to refresh Vds runtime >>> info: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Exception: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>> [dal.jar:] >>> ???????? 
at >>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:114) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227) >>> [vdsbroker.jar:] >>> ???????? at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source) >>> [:1.8.0_102] >>> ???????? at >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>> >>> [rt.jar:1.8.0_102] >>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>> [rt.jar:1.8.0_102] >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>> >>> [scheduler.jar:] >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>> [scheduler.jar:] >>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>> [quartz.jar:] >>> ???????? at >>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>> >>> [quartz.jar:] >>> >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-10) [21574461] Exception: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>> >>> [vdsbroker.jar:] >>> ???????? 
at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>> [dal.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:114) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227) >>> [vdsbroker.jar:] >>> ???????? at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown Source) >>> [:1.8.0_102] >>> ???????? at >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>> >>> [rt.jar:1.8.0_102] >>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>> [rt.jar:1.8.0_102] >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>> >>> [scheduler.jar:] >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>> [scheduler.jar:] >>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>> [quartz.jar:] >>> ???????? 
at >>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>> >>> [quartz.jar:] >>> >>> 2018-01-23 13:45:33,671 ERROR >>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>> (DefaultQuartzScheduler_Worker-87) [] Exception: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>> [dal.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>> >>> [vdsbroker.jar:] >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>> >>> [vdsbroker.jar:] >>> >>> >>> >>> >>> Here are the engine logs show problem with node d0lppn065, the VMs >>> first go >>> to "Unknown" then then "Unknown" plus "not responding": >>> >>> 2018-01-23 14:48:00,712 ERROR >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (org.ovirt.thread.pool-8-thread-28) [] Correlation ID: null, Call Stack: >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>> org.ovirt.vdsm.jsonrpc.client.ClientConnection >>> Exception: Connection failed >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:157) >>> >>> ???????? 
at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:120) >>> >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>> >>> ???????? at >>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>> >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher.fetch(VmsStatisticsFetcher.java:27) >>> >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:35) >>> >>> ???????? at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown Source) >>> ???????? at >>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>> >>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>> >>> ???????? at >>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>> ???????? at >>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>> >>> Caused by: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: >>> Connection failed >>> ???????? at >>> org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.connect(ReactorClient.java:155) >>> >>> ???????? at >>> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.getClient(JsonRpcClient.java:134) >>> >>> ???????? at >>> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:81) >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.(FutureMap.java:70) >>> >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.getAllVmStats(JsonRpcVdsServer.java:331) >>> >>> ???????? 
at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:20) >>> >>> ???????? at >>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>> >>> ???????? ... 12 more >>> , Custom Event ID: -1, Message: Host d0lppn065 is non responsive. >>> 2018-01-23 14:48:00,713 INFO >>> [org.ovirt.engine.core.bll.VdsEventListener] >>> (org.ovirt.thread.pool-8-thread-1) [] ResourceManager::vdsNotResponding >>> entered for Host '2797cae7-6886-4898-a5e4-23361ce03a90', '10.32.0.65' >>> 2018-01-23 14:48:00,713 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (org.ovirt.thread.pool-8-thread-36) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM vtop3 was set to the Unknown >>> status. >>> >>> ...etc... (sorry about the wraps below) >>> >>> 2018-01-23 14:59:07,817 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '30f7af86-c2b9-41c3-b2c5-49f5bbdd0e27'(d0lpvd070) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:07,819 INFO >>> [org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher] >>> (DefaultQuartzScheduler_Worker-74) [] Fetched 15 VMs from VDS >>> '8cb119c5-b7f0-48a3-970a-205d96b2e940' >>> 2018-01-23 14:59:07,936 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvd070 is not responding. 
>>> 2018-01-23 14:59:07,939 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> 'ebc5bb82-b985-451b-8313-827b5f40eaf3'(d0lpvd039) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,032 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvd039 is not responding. >>> 2018-01-23 14:59:08,038 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '494c4f9e-1616-476a-8f66-a26a96b76e56'(vtop3) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,134 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM vtop3 is not responding. >>> 2018-01-23 14:59:08,136 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> 'eaeaf73c-d9e2-426e-a2f2-7fcf085137b0'(d0lpvw059) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,237 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvw059 is not responding. >>> 2018-01-23 14:59:08,239 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '8308a547-37a1-4163-8170-f89b6dc85ba8'(d0lpvm058) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,326 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvm058 is not responding. 
>>> 2018-01-23 14:59:08,328 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '3d544926-3326-44e1-8b2a-ec632f51112a'(d0lqva056) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,400 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lqva056 is not responding. >>> 2018-01-23 14:59:08,402 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '989e5a17-789d-4eba-8a5e-f74846128842'(d0lpva078) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,472 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpva078 is not responding. >>> 2018-01-23 14:59:08,474 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '050a71c1-9e65-43c6-bdb2-18eba571e2eb'(d0lpvw077) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,545 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvw077 is not responding. >>> 2018-01-23 14:59:08,547 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> 'c3b497fd-6181-4dd1-9acf-8e32f981f769'(d0lpva079) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,621 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpva079 is not responding. 
>>> 2018-01-23 14:59:08,623 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '7cd22b39-feb1-4c6e-8643-ac8fb0578842'(d0lqva034) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,690 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lqva034 is not responding. >>> 2018-01-23 14:59:08,692 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '2ab9b1d8-d1e8-4071-a47c-294e586d2fb6'(d0lpvd038) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,763 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lpvd038 is not responding. >>> 2018-01-23 14:59:08,768 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> 'ecb4e795-9eeb-4cdc-a356-c1b9b32af5aa'(d0lqva031) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,836 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lqva031 is not responding. >>> 2018-01-23 14:59:08,838 INFO >>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>> (DefaultQuartzScheduler_Worker-75) [] VM >>> '1a361727-1607-43d9-bd22-34d45b386d3e'(d0lqva033) moved from 'Up' --> >>> 'NotResponding' >>> 2018-01-23 14:59:08,911 WARN >>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>> null, Custom Event ID: -1, Message: VM d0lqva033 is not responding. 
>>> 2018-01-23 14:59:08,913 INFO
>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer]
>>> (DefaultQuartzScheduler_Worker-75) [] VM
>>> '0cd65f90-719e-429e-a845-f425612d7b14'(vtop4) moved from 'Up' -->
>>> 'NotResponding'
>>> 2018-01-23 14:59:08,984 WARN
>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack:
>>> null, Custom Event ID: -1, Message: VM vtop4 is not responding.
>>>
>>>>
>>>> Probably it's time to think to upgrade your environment from 3.6.
>>>
>>>
>>> I know. But from a production standpoint mid-2016 wasn't that long ago.
>>> And 4 was just coming out of beta at the time.
>>>
>>> We were upgrading from 3.4 to 3.6. And it took a long time (again,
>>> because it's all "live"). Trust me, the move to 4.0 was discussed; it
>>> was just a timing thing.
>>>
>>> With that said, I do "hear you"... and certainly it's being discussed.
>>> We just don't see a "good" migration path... we see a slow path (moving
>>> nodes out, upgrading, etc.) and knowing that, as with all things, nobody
>>> can guarantee "success", which would be a very bad thing. So going from
>>> a working 3.6 to a totally (potentially) broken 4.2 isn't going to
>>> impress anyone here, you know? If all goes according to our best
>>> guesses, then great, but when things go bad, and the chance is not
>>> insignificant, well... I'm just not quite prepared with my résumé, if
>>> you know what I mean.
>>>
>>> Don't get me wrong, our move from 3.4 to 3.6 had some similar risks,
>>> but we also migrated to whole new infrastructure, a luxury we will not
>>> have this time. And somehow 3.4 to 3.6 doesn't sound as risky as 3.6
>>> to 4.2.
>>
>> I see your concern. However, keeping your system updated with recent
>> software is something I would recommend. You could set up a parallel
>> 4.2 env and move the VMs slowly from 3.6.
>
> Understood.
But would people want software that changes so quickly?
> This isn't like moving from RH 7.2 to 7.3 in a matter of months; it's
> more like moving from major release to major release in a matter of
> months and doing it again, potentially in a matter of months. Granted,
> we're running oVirt and not RHV, so maybe we should be on the Fedora
> style upgrade plan. Just not conducive to an enterprise environment
> (oVirt people, stop laughing).
>
>>
>>>
>>> Is there a path from oVirt to RHEV? Every bit of help we get helps
>>> us in making that decision as well, which I think would be a very
>>> good thing for both of us. (I inherited all this oVirt and I was the
>>> "guy" doing the 3.4 to 3.6 with the all-new infrastructure.)
>>
>> Yes, you can import your setup to RHEV.
>
> Good to know. Because of the fragility (support wise... I mean our
> oVirt has been rock solid, apart from rare glitches like this), we may
> follow this path.
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From stirabos at redhat.com  Mon Feb  5 16:16:02 2018
From: stirabos at redhat.com (Simone Tiraboschi)
Date: Mon, 5 Feb 2018 17:16:02 +0100
Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node..
In-Reply-To: 
References: 
Message-ID: 

On Fri, Feb 2, 2018 at 9:10 PM, Thomas Davis wrote:

> Is this supported?
>
> I have a node, that centos 7.4 minimal is installed on, with an interface
> setup for an IP address.
>
> I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run
> screen, and then do the 'hosted-engine --deploy' command.
>
Fine, nothing else is required.

> It hangs on:
>
> [ INFO ] changed: [localhost]
> [ INFO ] TASK [Get ovirtmgmt route table id]
> [ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 50, "changed": true, > "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ > print $9 }'", "delta": "0:00:00.004845", "end": "2018-02-02 > 12:03:30.794860", "rc": 0, "start": "2018-02-02 12:03:30.790015", "stderr": > "", "stderr_lines": [], "stdout": "", "stdout_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > [ INFO ] Cleaning temporary resources > [ INFO ] TASK [Gathering Facts] > [ INFO ] ok: [localhost] > [ INFO ] TASK [Remove local vm dir] > [ INFO ] ok: [localhost] > [ INFO ] Generating answer file '/var/lib/ovirt-hosted-engine- > setup/answers/answers-20180202120333.conf' > [ INFO ] Stage: Pre-termination > [ INFO ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs for the > issue, fix accordingly or re-deploy from scratch. > Log file is located at /var/log/ovirt-hosted-engine- > setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log > > but the VM is up and running, just attached to the 192.168.122.0/24 subnet > > [root at d8-r13-c2-n1 ~]# ssh root at 192.168.122.37 > root at 192.168.122.37's password: > Last login: Fri Feb 2 11:54:47 2018 from 192.168.122.1 > [root at ovirt ~]# systemctl status ovirt-engine > ? ovirt-engine.service - oVirt Engine > Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; enabled; > vendor preset: disabled) > Active: active (running) since Fri 2018-02-02 11:54:42 PST; 11min ago > Main PID: 24724 (ovirt-engine.py) > CGroup: /system.slice/ovirt-engine.service > ??24724 /usr/bin/python /usr/share/ovirt-engine/ > services/ovirt-engine/ovirt-engine.py --redirect-output --systemd=notify > start > ??24856 ovirt-engine -server -XX:+TieredCompilation -Xms3971M > -Xmx3971M -Djava.awt.headless=true -Dsun.rmi.dgc.client.gcInterval=3600000 > -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse... > > Feb 02 11:54:41 ovirt.crt.nersc.gov systemd[1]: Starting oVirt Engine... 
> Feb 02 11:54:41 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02 > 11:54:41,767-0800 ovirt-engine: INFO _detectJBossVersion:187 Detecting > JBoss version. Running: /usr/lib/jvm/jre/...600000', '- > Feb 02 11:54:42 ovirt.crt.nersc.gov ovirt-engine.py[24724]: 2018-02-02 > 11:54:42,394-0800 ovirt-engine: INFO _detectJBossVersion:207 Return code: > 0, | stdout: '[u'WildFly Full 11.0.0....tderr: '[]' > Feb 02 11:54:42 ovirt.crt.nersc.gov systemd[1]: Started oVirt Engine. > Feb 02 11:55:25 ovirt.crt.nersc.gov python2[25640]: ansible-stat Invoked > with checksum_algorithm=sha1 get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:29 ovirt.crt.nersc.gov python2[25698]: ansible-stat Invoked > with checksum_algorithm=sha1 get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25741]: ansible-stat Invoked > with checksum_algorithm=sha1 get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:30 ovirt.crt.nersc.gov python2[25767]: ansible-stat Invoked > with checksum_algorithm=sha1 get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:31 ovirt.crt.nersc.gov python2[25795]: ansible-stat Invoked > with checksum_algorithm=sha1 get_checksum=True follow=False > path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True > > The 'ip rule list' never has an ovirtmgmt rule/table in it.. which means > the ansible script loops then dies; vdsmd has never configured the network > on the node. > Right. Can you please attach engine.log and host-deploy from the engine VM? > > [root at d8-r13-c2-n1 ~]# systemctl status vdsmd -l > ? 
vdsmd.service - Virtual Desktop Server Manager > Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; vendor > preset: enabled) > Active: active (running) since Fri 2018-02-02 11:55:11 PST; 14min ago > Main PID: 7654 (vdsmd) > CGroup: /system.slice/vdsmd.service > ??7654 /usr/bin/python2 /usr/share/vdsm/vdsmd > > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running > dummybr > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running > tune_system > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running > test_space > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: Running > test_lo > Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop Server > Manager. > Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File: /var/run/vdsm/trackedInterfaces/vnet0 > already removed > Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, ignoring > event '|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0' > args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering up', > 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', 'type': 'vnc', > 'port': '5900'}], 'hash': '5328187475809024041', 'cpuUser': '0.00', > 'monitorResponse': '0', 'elapsedTime': '0', 'cpuSys': '0.00', 'vcpuPeriod': > 100000L, 'timeOffset': '0', 'clientIp': '', 'pauseCode': 'NOERR', > 'vcpuQuota': '-1'}} > Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available. > Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM stats > will be missing. > Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in favor > of ping2 and confirmConnectivity > > Do I need to install a complete ovirt-engine on the node first, bring the > node into ovirt, then bring up hosted-engine? I'd like to avoid this and > just go straight to hosted-engine setup. 
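[Editor's note] The failing Ansible task above is simply polling the quoted `cmd` string until it returns a routing-table id. As a hedged sketch (the pipeline is copied from the task's own command; the assumption, taken from its awk expression, is that the table id is the ninth field of the matching `ip rule` line), the same probe can be run by hand on the host to see what the playbook is waiting for:

```shell
# Re-run the playbook's probe manually. An empty result means VDSM has
# not (yet) created the ovirtmgmt routing rule -- exactly the condition
# the task times out on after 50 attempts.
probe_ovirtmgmt() {
    ip rule list | grep ovirtmgmt | sed 's/\[.*\] //g' | awk '{ print $9 }'
}

table_id=$(probe_ovirtmgmt)
if [ -z "$table_id" ]; then
    echo "ovirtmgmt rule missing - check vdsmd/supervdsmd logs"
else
    echo "ovirtmgmt route table id: $table_id"
fi
```

If the rule never appears, the problem is on the VDSM side (the management network was never configured), which matches the journal output below.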
> > thomas > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Mon Feb 5 17:54:19 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 5 Feb 2018 19:54:19 +0200 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" < lorenzetto.luca at gmail.com> wrote: Hello, i'm starting the implementation of our disaster recovery site with RHV 4.1.latest for our production environment. Our production setup is very easy, with self hosted engine on dc KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA and EMC VNX8000. Both storage arrays supports replication via their own replication protocols (SRDF, MirrorView), so we'd like to delegate to them the replication of data to the remote site, which is located on another remote datacenter. In KVMPD DC we have some storage domains that contains non critical VMs, which we don't want to replicate to remote site (in case of failure they have a low priority and will be restored from a backup). In our setup we won't replicate them, so will be not available for attachment on remote site. Can be this be an issue? Do we require to replicate everything? What about master domain? Do i require that the master storage domain stays on a replicated volume or can be any of the available ones? I've seen that since 4.1 there's an API for updating OVF_STORE disks. Do we require to invoke it with a frequency that is the compatible with the replication frequency on storage side. We set at the moment RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets updated with the required frequency? 
I've seen a recent presentation by Maor Lipchuk that is showing the
"automagic" ansible role for disaster recovery:

https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible

It's also related to some YouTube presentations demonstrating a real
DR plan execution.

But what I've seen is that Maor is explicitly talking about the 4.2
release. Does that role work only with 4.2+ releases, or can it be used
also on earlier (4.1) versions?


Releases before 4.2 do not store complete information on the OVF store to
perform such comprehensive failover. I warmly suggest 4.2!
Y.


I've tested a manual flow of replication + recovery through Import SD
followed by Import VM and it worked like a charm. Using a prebuilt
ansible role will reduce my effort on creating a new automation for
doing this.

Does anyone have experiences like mine?

Thank you for the help you may provide, I'd like to contribute back to
you with all my findings and with a usable tool (also integrated with
storage arrays if possible).

Luca

(Sorry for duplicate email, ctrl-enter happened before mail completion)

--
"It is absurd to employ men of excellent intelligence to perform
calculations that could be entrusted to anyone if machines were used"
Gottfried Wilhelm von Leibniz, Philosopher and Mathematician (1646-1716)

"The Internet is the world's largest library. It's just that all the
books are scattered on the floor"
John Allen Paulos, Mathematician (b. 1945)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
lorenzetto.luca at gmail.com>
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mlipchuk at redhat.com  Mon Feb  5 18:20:20 2018
From: mlipchuk at redhat.com (Maor Lipchuk)
Date: Mon, 5 Feb 2018 20:20:20 +0200
Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of
 storage domain replicated
In-Reply-To: 
References: 
Message-ID: 

Hi Luca,

Thank you for your interest in the Disaster Recovery ansible solution,
it is great to see users get familiar with it.
Please see my comments inline.

Regards,
Maor

On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul wrote:
>
>
> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" <
> lorenzetto.luca at gmail.com> wrote:
>
> Hello,
>
> i'm starting the implementation of our disaster recovery site with RHV
> 4.1.latest for our production environment.
>
> Our production setup is very easy, with self hosted engine on dc
> KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our
> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA
> and EMC VNX8000. Both storage arrays supports replication via their
> own replication protocols (SRDF, MirrorView), so we'd like to delegate
> to them the replication of data to the remote site, which is located
> on another remote datacenter.
>
> In KVMPD DC we have some storage domains that contains non critical
> VMs, which we don't want to replicate to remote site (in case of
> failure they have a low priority and will be restored from a backup).
> In our setup we won't replicate them, so will be not available for
> attachment on remote site. Can be this be an issue? Do we require to
> replicate everything?
>

No, it is not required to replicate everything.
If there are no disks on those storage domains that are attached to your
critical VMs/Templates, you don't have to use them as part of your mapping
var file.

> What about master domain? Do i require that the master storage domain
> stays on a replicated volume or can be any of the available ones?
>

You can choose which storage domains you want to recover.
Basically, if a storage domain is indicated as "master" in the mapping var
file then it should be attached first to the Data Center.
If your secondary setup already contains a master storage domain which you
don't care to replicate and recover, then you can configure your mapping
var file to only attach regular storage domains; simply indicate
"dr_master_domain: False" in the dr_import_storages for all the storage
domains.
(You can contact me on IRC if you need some guidance with it)

>
> I've seen that since 4.1 there's an API for updating OVF_STORE disks.
> Do we require to invoke it with a frequency that is the compatible
> with the replication frequency on storage side.
>

No, you don't have to use the update OVF_STORE disk for replication.
The OVF_STORE disk is being updated every 60 minutes (the default
configuration value).

> We set at the moment
> RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets
> updated with the required frequency?
>

The OVF_STORE disk is being updated every 60 minutes, but keep in mind
that the OVF_STORE is updated internally in the engine, so it might not
be synced with the RPO which you configured.
If I understood correctly, then you are right in estimating that the data
of the storage domain will be synced after approximately 2 hours = RPO of
1hr + OVF_STORE update of 1hr.

>
> I've seen a recent presentation by Maor Lipchuk that is showing the
> "automagic" ansible role for disaster recovery:
>
> https://www.slideshare.net/maorlipchuk/ovirt-dr-site-tosite-using-ansible
>
> It's also related to some YouTube presentations demonstrating a real
> DR plan execution.
>
> But what I've seen is that Maor is explicitly talking about the 4.2
> release. Does that role work only with 4.2+ releases, or can it be used
> also on earlier (4.1) versions?
>
>
> Releases before 4.2 do not store complete information on the OVF store to
> perform such comprehensive failover. I warmly suggest 4.2!
> Y.
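[Editor's note] The 60-minute OVF_STORE refresh mentioned above is an engine configuration value, so it can be inspected and tightened to better track a storage-side RPO. A hedged sketch for the engine host follows — the key name `OvfUpdateIntervalInMinutes` is an assumption based on 4.x engine-config values, so confirm it with `engine-config -l` before setting anything:

```shell
# List OVF-related keys and read the current interval (key name assumed).
engine-config -l | grep -i ovf
engine-config -g OvfUpdateIntervalInMinutes

# Tighten the interval, then restart the engine so it takes effect.
engine-config -s OvfUpdateIntervalInMinutes=30
systemctl restart ovirt-engine
```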
>
Indeed, we also introduced several functionalities, like detach of the
master storage domain and attach of a "dirty" master storage domain, which
are dependent on the failover process, so unfortunately to support a full
recovery process you will need an oVirt 4.2 env.

>
> I've tested a manual flow of replication + recovery through Import SD
> followed by Import VM and it worked like a charm. Using a prebuilt
> ansible role will reduce my effort on creating a new automation for
> doing this.
>
> Does anyone have experiences like mine?
>
> Thank you for the help you may provide, I'd like to contribute back to
> you with all my findings and with a usable tool (also integrated with
> storage arrays if possible).
>

Please feel free to share your comments and questions; I would very much
appreciate hearing about your user experience.

>
> Luca
>
> (Sorry for duplicate email, ctrl-enter happened before mail completion)
>
>
> --
> "It is absurd to employ men of excellent intelligence to perform
> calculations that could be entrusted to anyone if machines were used"
> Gottfried Wilhelm von Leibniz, Philosopher and Mathematician (1646-1716)
>
> "The Internet is the world's largest library. It's just that all the
> books are scattered on the floor"
> John Allen Paulos, Mathematician (b. 1945)
>
> Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , <
> lorenzetto.luca at gmail.com>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
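[Editor's note] To make the mapping var file discussed in this thread concrete, here is a hedged sketch of the storage section. The key names `dr_import_storages`, `dr_domain_type`, and `dr_master_domain` follow the oVirt DR ansible role as described above; the domain names are hypothetical placeholders, and non-replicated, non-critical domains are simply omitted from the list:

```shell
# Write a minimal DR mapping var file: one master domain (attached
# first on failover) and one regular domain. Anything not listed is
# not attached on the secondary site.
cat > dr_vars.yml <<'EOF'
dr_import_storages:
  - dr_domain_type: fcp
    dr_primary_name: prod_critical
    dr_master_domain: True        # attached first during failover
  - dr_domain_type: fcp
    dr_primary_name: prod_apps
    dr_master_domain: False       # attached as a regular domain
EOF
echo "entries: $(grep -c 'dr_primary_name' dr_vars.yml)"
```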
URL: 

From ccox at endlessnow.com  Mon Feb  5 20:53:36 2018
From: ccox at endlessnow.com (Christopher Cox)
Date: Mon, 5 Feb 2018 14:53:36 -0600
Subject: [ovirt-users] ovirt 3.6, we had the ovirt manager go down in a
 bad way and all VMs for one node marked Unknown and Not Reponding while up
In-Reply-To: 
References: <0e21836d-c660-227b-c7a8-d57a09d40d2d@endlessnow.com>
 <18dab82b-40ec-e034-74fb-2c2db7372f7c@endlessnow.com>
 <43c20a8a-6e77-19b6-283e-6da9f3929fd5@endlessnow.com>
Message-ID: <21d7b999-b96f-b619-cfa6-8205e8d6e3cc@endlessnow.com>

Answering my own post... a restart of vdsmd on the affected blade has
fixed everything.

Thanks everyone who helped.

On 02/05/2018 10:02 AM, Christopher Cox wrote:
> Forgive the top post. I guess what I need to know now is whether there
> is a recovery path that doesn't lead to total loss of the VMs that are
> currently in the "Unknown" "Not responding" state.
>
> We are planning a total oVirt shutdown. I just would like to know if
> we've effectively lost those VMs or not. Again, the VMs are currently
> "up". And we use a file backup process, so in theory they can be
> restored, just somewhat painfully, from scratch.
>
> But if somebody knows: if we shut down all the bad VMs and the blade, is
> there some way oVirt can know the VMs are "ok" to start up? Will
> changing their state directly to "down" in the db stick if the blade is
> down? That is, will we get to a known state where the VMs can actually
> be started and brought back into a known state?
>
> Right now, we're feeling there's a good chance we will not be able to
> recover these VMs, even though they are "up" right now. I really need
> some way to force oVirt into an integral state, even if it means we take
> the whole thing down.
>
> Possible?
> > > On 01/25/2018 06:57 PM, Christopher Cox wrote: >> >> >> On 01/25/2018 04:57 PM, Douglas Landgraf wrote: >>> On Thu, Jan 25, 2018 at 5:12 PM, Christopher Cox >>> wrote: >>>> On 01/25/2018 02:25 PM, Douglas Landgraf wrote: >>>>> >>>>> On Wed, Jan 24, 2018 at 10:18 AM, Christopher Cox >>>>> >>>>> wrote: >>>>>> >>>>>> Would restarting vdsm on the node in question help fix this? >>>>>> Again, all >>>>>> the >>>>>> VMs are up on the node.? Prior attempts to fix this problem have >>>>>> left the >>>>>> node in a state where I can issue the "has been rebooted" command >>>>>> to it, >>>>>> it's confused. >>>>>> >>>>>> So... node is up.? All VMs are up.? Can't issue "has been >>>>>> rebooted" to >>>>>> the >>>>>> node, all VMs show Unknown and not responding but they are up. >>>>>> >>>>>> Chaning the status is the ovirt db to 0 works for a second and >>>>>> then it >>>>>> goes >>>>>> immediately back to 8 (which is why I'm wondering if I should restart >>>>>> vdsm >>>>>> on the node). >>>>> >>>>> >>>>> It's not recommended to change db manually. >>>>> >>>>>> >>>>>> Oddly enough, we're running all of this in production.? So, >>>>>> watching it >>>>>> all >>>>>> go down isn't the best option for us. >>>>>> >>>>>> Any advice is welcome. >>>>> >>>>> >>>>> >>>>> We would need to see the node/engine logs, have you found any error in >>>>> the vdsm.log >>>>> (from nodes) or engine.log? Could you please share the error? >>>> >>>> >>>> >>>> In short, the error is our ovirt manager lost network (our problem) and >>>> crashed hard (hardware issue on the server)..? On bring up, we had some >>>> network changes (that caused the lost network problem) so our LACP >>>> bond was >>>> down for a bit while we were trying to bring it up (noting the ovirt >>>> manager >>>> is up while we're reestablishing the network on the switch side). >>>> >>>> In other word, that's the "error" so to speak that got us to where >>>> we are. >>>> >>>> Full DEBUG enabled on the logs... 
The error messages seem obvious to >>>> me.. >>>> starts like this (nothing the ISO DOMAIN was coming off an NFS mount >>>> off the >>>> ovirt management server... yes... we know... we do have plans to >>>> move that). >>>> >>>> So on the hypervisor node itself, from the vdsm.log (vdsm.log.33.xz): >>>> >>>> (hopefully no surprise here) >>>> >>>> Thread-2426633::WARNING::2018-01-23 >>>> 13:50:56,672::fileSD::749::Storage.scanDomains::(collectMetaFiles) >>>> Could not >>>> collect metadata file for domain path >>>> /rhev/data-center/mnt/d0lppc129.skopos.me:_var_lib_exports_iso-20160408002844 >>>> >>>> Traceback (most recent call last): >>>> ?? File "/usr/share/vdsm/storage/fileSD.py", line 735, in >>>> collectMetaFiles >>>> ???? sd.DOMAIN_META_DATA)) >>>> ?? File "/usr/share/vdsm/storage/outOfProcess.py", line 121, in glob >>>> ???? return self._iop.glob(pattern) >>>> ?? File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", >>>> line 536, >>>> in glob >>>> ???? return self._sendCommand("glob", {"pattern": pattern}, >>>> self.timeout) >>>> ?? File "/usr/lib/python2.7/site-packages/ioprocess/__init__.py", >>>> line 421, >>>> in _sendCommand >>>> ???? raise Timeout(os.strerror(errno.ETIMEDOUT)) >>>> Timeout: Connection timed out >>>> Thread-27::ERROR::2018-01-23 >>>> 13:50:56,672::sdc::145::Storage.StorageDomainCache::(_findDomain) >>>> domain >>>> e5ecae2f-5a06-4743-9a43-e74d83992c35 not found >>>> Traceback (most recent call last): >>>> ?? File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain >>>> ???? dom = findMethod(sdUUID) >>>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain >>>> ???? return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID)) >>>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath >>>> ???? 
raise se.StorageDomainDoesNotExist(sdUUID) >>>> StorageDomainDoesNotExist: Storage domain does not exist: >>>> (u'e5ecae2f-5a06-4743-9a43-e74d83992c35',) >>>> Thread-27::ERROR::2018-01-23 >>>> 13:50:56,673::monitor::276::Storage.Monitor::(_monitorDomain) Error >>>> monitoring domain e5ecae2f-5a06-4743-9a43-e74d83992c35 >>>> Traceback (most recent call last): >>>> ?? File "/usr/share/vdsm/storage/monitor.py", line 272, in >>>> _monitorDomain >>>> ???? self._performDomainSelftest() >>>> ?? File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 769, in >>>> wrapper >>>> ???? value = meth(self, *a, **kw) >>>> ?? File "/usr/share/vdsm/storage/monitor.py", line 339, in >>>> _performDomainSelftest >>>> ???? self.domain.selftest() >>>> ?? File "/usr/share/vdsm/storage/sdc.py", line 49, in __getattr__ >>>> ???? return getattr(self.getRealDomain(), attrName) >>>> ?? File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain >>>> ???? return self._cache._realProduce(self._sdUUID) >>>> ?? File "/usr/share/vdsm/storage/sdc.py", line 124, in _realProduce >>>> ???? domain = self._findDomain(sdUUID) >>>> ?? File "/usr/share/vdsm/storage/sdc.py", line 143, in _findDomain >>>> ???? dom = findMethod(sdUUID) >>>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain >>>> ???? return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID)) >>>> ?? File "/usr/share/vdsm/storage/nfsSD.py", line 112, in findDomainPath >>>> ???? raise se.StorageDomainDoesNotExist(sdUUID) >>>> StorageDomainDoesNotExist: Storage domain does not exist: >>>> (u'e5ecae2f-5a06-4743-9a43-e74d83992c35',) >>>> >>>> >>>> Again, all the hypervisor nodes will complain about having the NFS >>>> area for >>>> ISO DOMAIN now gone.? Remember the ovirt manager node held this and >>>> it has >>>> now network has gone out and the node crashed (note: the ovirt node >>>> (the >>>> actual server box) shouldn't crash due to the network outage, but it >>>> did. 
>>> >>> >>> I have added VDSM people in this thread to review it. I am assuming >>> the network changes (during the crash) still make the storage domain >>> available for the nodes. >> >> Ideally, nothing was lost node wise (neither LAN nor iSCSI), just the >> ovirt manager lost its network connection.? So the only thing, as I >> mentioned, storage wise that was lost was the ISO DOMAIN which was >> NFS'd off the ovirt manager. >> >>> >>>> >>>> So here is the engine collapse as it lost network connectivity >>>> (before the >>>> server actually crashed hard). >>>> >>>> 2018-01-23 13:45:33,666 ERROR >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-87) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VDSM d0lppn067 command failed: >>>> Heartbeat >>>> exeeded >>>> 2018-01-23 13:45:33,666 ERROR >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-10) [21574461] Correlation ID: null, >>>> Call >>>> Stack: null, Custom Event ID: -1, Message: VDSM d0lppn072 command >>>> failed: >>>> Heartbeat exeeded >>>> 2018-01-23 13:45:33,666 ERROR >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Correlation ID: null, >>>> Call >>>> Stack: null, Custom Event ID: -1, Message: VDSM d0lppn066 command >>>> failed: >>>> Heartbeat exeeded >>>> 2018-01-23 13:45:33,667 ERROR >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>>> (DefaultQuartzScheduler_Worker-87) [] Command >>>> 'GetStatsVDSCommand(HostName = >>>> d0lppn067, VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>>> hostId='f99c68c8-b0e8-437b-8cd9-ebaddaaede96', >>>> vds='Host[d0lppn067,f99c68c8-b0e8-437b-8cd9-ebaddaaede96]'})' execution >>>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,667 ERROR >>>> 
[org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>>> (DefaultQuartzScheduler_Worker-10) [21574461] Command >>>> 'GetStatsVDSCommand(HostName = d0lppn072, >>>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>>> hostId='fdc00296-973d-4268-bd79-6dac535974e0', >>>> vds='Host[d0lppn072,fdc00296-973d-4268-bd79-6dac535974e0]'})' execution >>>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,667 ERROR >>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand] >>>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Command >>>> 'GetStatsVDSCommand(HostName = d0lppn066, >>>> VdsIdAndVdsVDSCommandParametersBase:{runAsync='true', >>>> hostId='14abf559-4b62-4ebd-a345-77fa9e1fa3ae', >>>> vds='Host[d0lppn066,14abf559-4b62-4ebd-a345-77fa9e1fa3ae]'})' execution >>>> failed: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,669 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-87) []? Failed getting vds stats, >>>> vds='d0lppn067'(f99c68c8-b0e8-437b-8cd9-ebaddaaede96): >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,669 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-10) [21574461]? Failed getting vds >>>> stats, >>>> vds='d0lppn072'(fdc00296-973d-4268-bd79-6dac535974e0): >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,669 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d]? 
Failed getting vds >>>> stats, >>>> vds='d0lppn066'(14abf559-4b62-4ebd-a345-77fa9e1fa3ae): >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-10) [21574461] Failure to refresh Vds >>>> runtime >>>> info: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Failure to refresh Vds >>>> runtime >>>> info: VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-87) [] Failure to refresh Vds runtime >>>> info: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-37) [4e8ec41d] Exception: >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>>> >>>> [dal.jar:] >>>> ???????? 
at >>>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:114) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227) >>>> [vdsbroker.jar:] >>>> ???????? at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown >>>> Source) >>>> [:1.8.0_102] >>>> ???????? at >>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>> >>>> [rt.jar:1.8.0_102] >>>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>>> [rt.jar:1.8.0_102] >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>>> >>>> [scheduler.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>>> >>>> [scheduler.jar:] >>>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>>> [quartz.jar:] >>>> ???????? at >>>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>>> >>>> [quartz.jar:] >>>> >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-10) [21574461] Exception: >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>>> >>>> [vdsbroker.jar:] >>>> ???????? 
at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>>> >>>> [dal.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsRunTimeInfo(HostMonitoring.java:114) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refresh(HostMonitoring.java:84) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VdsManager.onTimer(VdsManager.java:227) >>>> [vdsbroker.jar:] >>>> ???????? at sun.reflect.GeneratedMethodAccessor75.invoke(Unknown >>>> Source) >>>> [:1.8.0_102] >>>> ???????? at >>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>> >>>> [rt.jar:1.8.0_102] >>>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>>> [rt.jar:1.8.0_102] >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>>> >>>> [scheduler.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>>> >>>> [scheduler.jar:] >>>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>>> [quartz.jar:] >>>> ???????? 
at >>>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>>> >>>> [quartz.jar:] >>>> >>>> 2018-01-23 13:45:33,671 ERROR >>>> [org.ovirt.engine.core.vdsbroker.HostMonitoring] >>>> (DefaultQuartzScheduler_Worker-87) [] Exception: >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> VDSGenericException: VDSNetworkException: Heartbeat exeeded >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase.proceedProxyReturnValue(BrokerCommandBase.java:188) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetStatsVDSCommand.executeVdsBrokerCommand(GetStatsVDSCommand.java:21) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>>> >>>> [dal.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>>> >>>> [vdsbroker.jar:] >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.HostMonitoring.refreshVdsStats(HostMonitoring.java:472) >>>> >>>> [vdsbroker.jar:] >>>> >>>> >>>> >>>> >>>> Here are the engine logs show problem with node d0lppn065, the VMs >>>> first go >>>> to "Unknown" then then "Unknown" plus "not responding": >>>> >>>> 2018-01-23 14:48:00,712 ERROR >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (org.ovirt.thread.pool-8-thread-28) [] Correlation ID: null, Call >>>> Stack: >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VDSNetworkException: >>>> org.ovirt.vdsm.jsonrpc.client.ClientConnection >>>> Exception: Connection failed >>>> ???????? 
at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.createNetworkException(VdsBrokerCommand.java:157) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:120) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VDSCommandBase.executeCommand(VDSCommandBase.java:65) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.dal.VdcCommandBase.execute(VdcCommandBase.java:33) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.ResourceManager.runVdsCommand(ResourceManager.java:467) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher.fetch(VmsStatisticsFetcher.java:27) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:35) >>>> >>>> ???????? at sun.reflect.GeneratedMethodAccessor80.invoke(Unknown >>>> Source) >>>> ???????? at >>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) >>>> >>>> ???????? at java.lang.reflect.Method.invoke(Method.java:498) >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.invokeMethod(JobWrapper.java:81) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:52) >>>> >>>> ???????? at org.quartz.core.JobRunShell.run(JobRunShell.java:213) >>>> ???????? at >>>> org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557) >>>> >>>> Caused by: org.ovirt.vdsm.jsonrpc.client.ClientConnectionException: >>>> Connection failed >>>> ???????? at >>>> org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient.connect(ReactorClient.java:155) >>>> >>>> ???????? at >>>> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.getClient(JsonRpcClient.java:134) >>>> >>>> ???????? at >>>> org.ovirt.vdsm.jsonrpc.client.JsonRpcClient.call(JsonRpcClient.java:81) >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.jsonrpc.FutureMap.(FutureMap.java:70) >>>> >>>> ???????? 
at >>>> org.ovirt.engine.core.vdsbroker.jsonrpc.JsonRpcVdsServer.getAllVmStats(JsonRpcVdsServer.java:331) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.GetAllVmStatsVDSCommand.executeVdsBrokerCommand(GetAllVmStatsVDSCommand.java:20) >>>> >>>> ???????? at >>>> org.ovirt.engine.core.vdsbroker.vdsbroker.VdsBrokerCommand.executeVDSCommand(VdsBrokerCommand.java:110) >>>> >>>> ???????? ... 12 more >>>> , Custom Event ID: -1, Message: Host d0lppn065 is non responsive. >>>> 2018-01-23 14:48:00,713 INFO >>>> [org.ovirt.engine.core.bll.VdsEventListener] >>>> (org.ovirt.thread.pool-8-thread-1) [] ResourceManager::vdsNotResponding >>>> entered for Host '2797cae7-6886-4898-a5e4-23361ce03a90', '10.32.0.65' >>>> 2018-01-23 14:48:00,713 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (org.ovirt.thread.pool-8-thread-36) [] Correlation ID: null, Call >>>> Stack: >>>> null, Custom Event ID: -1, Message: VM vtop3 was set to the Unknown >>>> status. >>>> >>>> ...etc... (sorry about the wraps below) >>>> >>>> 2018-01-23 14:59:07,817 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '30f7af86-c2b9-41c3-b2c5-49f5bbdd0e27'(d0lpvd070) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:07,819 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmsStatisticsFetcher] >>>> (DefaultQuartzScheduler_Worker-74) [] Fetched 15 VMs from VDS >>>> '8cb119c5-b7f0-48a3-970a-205d96b2e940' >>>> 2018-01-23 14:59:07,936 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvd070 is not responding. 
>>>> 2018-01-23 14:59:07,939 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> 'ebc5bb82-b985-451b-8313-827b5f40eaf3'(d0lpvd039) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,032 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvd039 is not responding. >>>> 2018-01-23 14:59:08,038 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '494c4f9e-1616-476a-8f66-a26a96b76e56'(vtop3) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,134 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM vtop3 is not responding. >>>> 2018-01-23 14:59:08,136 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> 'eaeaf73c-d9e2-426e-a2f2-7fcf085137b0'(d0lpvw059) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,237 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvw059 is not responding. >>>> 2018-01-23 14:59:08,239 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '8308a547-37a1-4163-8170-f89b6dc85ba8'(d0lpvm058) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,326 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvm058 is not responding. 
>>>> 2018-01-23 14:59:08,328 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '3d544926-3326-44e1-8b2a-ec632f51112a'(d0lqva056) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,400 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lqva056 is not responding. >>>> 2018-01-23 14:59:08,402 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '989e5a17-789d-4eba-8a5e-f74846128842'(d0lpva078) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,472 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpva078 is not responding. >>>> 2018-01-23 14:59:08,474 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '050a71c1-9e65-43c6-bdb2-18eba571e2eb'(d0lpvw077) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,545 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvw077 is not responding. >>>> 2018-01-23 14:59:08,547 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> 'c3b497fd-6181-4dd1-9acf-8e32f981f769'(d0lpva079) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,621 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpva079 is not responding. 
>>>> 2018-01-23 14:59:08,623 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '7cd22b39-feb1-4c6e-8643-ac8fb0578842'(d0lqva034) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,690 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lqva034 is not responding. >>>> 2018-01-23 14:59:08,692 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '2ab9b1d8-d1e8-4071-a47c-294e586d2fb6'(d0lpvd038) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,763 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lpvd038 is not responding. >>>> 2018-01-23 14:59:08,768 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> 'ecb4e795-9eeb-4cdc-a356-c1b9b32af5aa'(d0lqva031) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,836 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lqva031 is not responding. >>>> 2018-01-23 14:59:08,838 INFO >>>> [org.ovirt.engine.core.vdsbroker.VmAnalyzer] >>>> (DefaultQuartzScheduler_Worker-75) [] VM >>>> '1a361727-1607-43d9-bd22-34d45b386d3e'(d0lqva033) moved from 'Up' --> >>>> 'NotResponding' >>>> 2018-01-23 14:59:08,911 WARN >>>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >>>> (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: >>>> null, Custom Event ID: -1, Message: VM d0lqva033 is not responding. 
>>>> 2018-01-23 14:59:08,913 INFO [org.ovirt.engine.core.vdsbroker.VmAnalyzer] (DefaultQuartzScheduler_Worker-75) [] VM '0cd65f90-719e-429e-a845-f425612d7b14'(vtop4) moved from 'Up' --> 'NotResponding'
>>>> 2018-01-23 14:59:08,984 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler_Worker-75) [] Correlation ID: null, Call Stack: null, Custom Event ID: -1, Message: VM vtop4 is not responding.
>>>>
>>>>> Probably it's time to think to upgrade your environment from 3.6.
>>>>
>>>> I know. But from a production standpoint mid-2016 wasn't that long
>>>> ago. And 4 was just coming out of beta at the time.
>>>>
>>>> We were upgrading from 3.4 to 3.6. And it took a long time (again,
>>>> because it's all "live"). Trust me, the move to 4.0 was discussed; it
>>>> was just a timing thing.
>>>>
>>>> With that said, I do "hear you"... and certainly it's being discussed.
>>>> We just don't see a "good" migration path... we see a slow path
>>>> (moving nodes out, upgrading, etc.), and knowing that, as with all
>>>> things, nobody can guarantee "success", which would be a very bad
>>>> thing. So going from a working 3.6 to a totally (potentially) broken
>>>> 4.2 isn't going to impress anyone here, you know? If all goes
>>>> according to our best guesses, then great, but when things go bad, and
>>>> the chance is not insignificant, well... I'm just not quite prepared
>>>> with my résumé, if you know what I mean.
>>>>
>>>> Don't get me wrong, our move from 3.4 to 3.6 had some similar risks,
>>>> but we also migrated to whole new infrastructure, a luxury we will not
>>>> have this time. And somehow 3.4 to 3.6 doesn't sound as risky as 3.6
>>>> to 4.2.
>>>
>>> I see your concern. However, keeping your system updated with recent
>>> software is something I would recommend. You could set up a parallel
>>> 4.2 env and move the VMs slowly from 3.6.
>> Understood. But would people want software that changes so quickly?
>> This isn't like moving from RH 7.2 to 7.3 in a matter of months; it's
>> more like moving from major release to major release in a matter of
>> months, and doing it again potentially in a matter of months. Granted,
>> we're running oVirt and not RHV, so maybe we should be on the
>> Fedora-style upgrade plan. Just not conducive to an enterprise
>> environment (oVirt people, stop laughing).
>>
>>>> Is there a path from oVirt to RHEV? Every bit of help we get helps
>>>> us in making that decision as well, which I think would be a very
>>>> good thing for both of us. (I inherited all this oVirt, and I was
>>>> the "guy" doing the 3.4 to 3.6 with the all-new infrastructure.)
>>>
>>> Yes, you can import your setup to RHEV.
>>
>> Good to know. Because of the fragility (support wise... I mean our
>> oVirt has been rock solid, apart from rare glitches like this), we may
>> follow this path.
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

From maozza at gmail.com Mon Feb 5 21:18:15 2018
From: maozza at gmail.com (maoz zadok)
Date: Mon, 5 Feb 2018 23:18:15 +0200
Subject: [ovirt-users] guest ip address not shown on the engine panel - version 4.2.0
Message-ID:

Hi All,
Is it possible that the "IP addresses" of the guest virtual machine will
be shown? It is currently empty.

[image: Inline image 1]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 46919 bytes
Desc: not available
URL:

From dyasny at gmail.com Mon Feb 5 21:21:46 2018
From: dyasny at gmail.com (Dan Yasny)
Date: Mon, 5 Feb 2018 16:21:46 -0500
Subject: [ovirt-users] guest ip address not shown on the engine panel - version 4.2.0
In-Reply-To:
References:
Message-ID:

Do you have the guest agent installed?

On Mon, Feb 5, 2018 at 4:18 PM, maoz zadok wrote:

> Hi All,
> Is it possible that the "IP addresses" of the guest virtual machine will
> be shown? It is currently empty.
>
> [image: Inline image 1]
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 46919 bytes
Desc: not available
URL:

From lorenzetto.luca at gmail.com Mon Feb 5 21:57:58 2018
From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto)
Date: Mon, 5 Feb 2018 22:57:58 +0100
Subject: [ovirt-users] Slow conversion from VMware in 4.1
In-Reply-To: <20180202115221.GA2787@redhat.com>
References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com>
Message-ID:

On Fri, Feb 2, 2018 at 12:52 PM, Richard W.M. Jones wrote:
> There is a section about this in the virt-v2v man page. I'm on
> a train at the moment but you should be able to find it. Try to
> run many conversions, at least 4 or 8 would be good places to start.

Hello Richard,

I read the man page but found nothing explicit about resource usage.
Anyway, digging on our setup, I found out that vCenter is at 95% CPU
usage even when load is low. I think our Windows admins should take
care of this.
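Richard's suggestion of running 4 or 8 conversions at once can be driven with a small worker pool; this sketch only caps how many run in parallel, and the `virt-v2v` argument lists shown in the comment are hypothetical — substitute your real source (`-ic`) and output (`-o`) options:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor


def run_conversion(cmd):
    # Each conversion is an external process; collect its exit status.
    return subprocess.run(cmd, capture_output=True).returncode


def convert_all(commands, parallel=4):
    # Cap the number of concurrent conversions at `parallel`
    # (4-8 is the range suggested in the thread above).
    with ThreadPoolExecutor(max_workers=parallel) as pool:
        return list(pool.map(run_conversion, commands))


# Hypothetical invocations -- adjust to your environment:
# commands = [["virt-v2v", "-ic", "vpx://vcenter/...", vm, "-o", "rhv", ...]
#             for vm in vms_to_convert]
# statuses = convert_all(commands, parallel=4)
```

A nonzero entry in the returned list flags a failed conversion, so you can retry just those VMs instead of the whole batch.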
Luca

--
"E' assurdo impiegare gli uomini di intelligenza eccellente per fare
calcoli che potrebbero essere affidati a chiunque se si usassero delle
macchine"
Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716)

"Internet è la più grande biblioteca del mondo. Ma il problema è che i
libri sono tutti sparsi sul pavimento"
John Allen Paulos, Matematico (1945-vivente)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net ,

From rjones at redhat.com Mon Feb 5 22:13:30 2018
From: rjones at redhat.com (Richard W.M. Jones)
Date: Mon, 5 Feb 2018 22:13:30 +0000
Subject: [ovirt-users] Slow conversion from VMware in 4.1
In-Reply-To:
References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com>
Message-ID: <20180205221330.GR2787@redhat.com>

On Mon, Feb 05, 2018 at 10:57:58PM +0100, Luca 'remix_tj' Lorenzetto wrote:
> On Fri, Feb 2, 2018 at 12:52 PM, Richard W.M. Jones wrote:
> > There is a section about this in the virt-v2v man page. I'm on
> > a train at the moment but you should be able to find it. Try to
> > run many conversions, at least 4 or 8 would be good places to start.
>
> Hello Richard,
>
> read the man but found nothing explicit about resource usage. Anyway,
> digging on our setup i found out that vcenter when on low cpu usage is
> 95%.
> I think our windows admins should take care of this.

http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources

You should be able to run multiple conversions in parallel to improve
throughput. The only long-term solution is to use a different method
such as VMX over SSH. vCenter is just fundamentally bad.

Rich.

--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW

From ehaas at redhat.com Tue Feb 6 07:28:02 2018
From: ehaas at redhat.com (Edward Haas)
Date: Tue, 6 Feb 2018 09:28:02 +0200
Subject: [ovirt-users] NetworkManager with oVirt version 4.2.0
In-Reply-To:
References:
Message-ID:

On Sun, Feb 4, 2018 at 10:01 PM, Vincent Royer wrote:

> I had these types of issues as well my first time around, and after a
> failed engine install I haven't been able to get things cleaned up, so I
> will have to start over. I created a bonded interface on the host before
> the engine setup, but once I created my first VM and assigned bond0 to
> it, the engine became inaccessible the moment the VM got an IP from the
> router.

Please clarify what it means to "assign bond0 to it". A vnic can be
defined on a network (using vnic profiles).
If your Engine is inaccessible, try to understand what changed in the
network; perhaps something collided (duplicate IP/s, routes, mac/s, etc.).

> What is the preferred way to set up bonded interfaces? In Cockpit or
> nmcli before hosted engine setup? Or proceed with only one interface,
> then add the other in engine?

All should work.

> Is it possible, for example, to set up bonded interfaces with a static
> management IP on vlan 50 to access the engine, and let the other VMs
> grab DHCP IPs on vlan 10?

Sure it is; one is the management (vlan 50) network and the other a VM
network (vlan 10).

On Feb 3, 2018 11:31 PM, "Edward Haas" wrote:

On Sat, Feb 3, 2018 at 9:06 AM, maoz zadok wrote:

>> Hello All,
>> I'm new to oVirt. I'm trying with no success to set up the networking
>> on an oVirt 4.2.0 node, and I think I'm missing something.
>>
>> Background:
>> interfaces em1-4 are bonded to bond0,
>> a VLAN is configured on bond0.1,
>> and bridged to ovirtmgmt for the management interface.
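A bond-plus-VLAN layout like the one described above (em1-4 into bond0, VLAN on top of the bond) can be pre-configured with nmcli along these lines. This is only a sketch: the interface names, the bond mode, the VLAN ID (50) and the addresses are all assumptions — adjust them to your hardware and network:

```
# Bond the four interfaces (802.3ad is an assumption; use what your switch supports)
nmcli connection add type bond ifname bond0 con-name bond0 bond.options "mode=802.3ad"
for nic in em1 em2 em3 em4; do
    nmcli connection add type ethernet ifname "$nic" con-name "bond0-$nic" master bond0
done

# Management VLAN on top of the bond, with a static IP (all values hypothetical)
nmcli connection add type vlan ifname bond0.50 con-name bond0.50 dev bond0 id 50 \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1

# Do NOT create the bridge by hand: per the advice above, Engine builds
# ovirtmgmt on top of this when the host is added.
```

Stopping at the VLAN and letting Engine create the bridge matches the flow Edward describes in this thread.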
>> I'm not sure it's updated to version 4.2.0, but I followed this post:
>> https://www.ovirt.org/documentation/how-to/networking/bonding-vlan-bridge/
>
> It looks like an old howto; we will need to update or remove it.
>
>> With this setting, NetworkManager keeps starting up on reboot,
>> and the interfaces are not managed by oVirt (and the nice traffic
>> graphs are not shown).
>
> For the interfaces to be owned by oVirt, you will need to add the host
> to Engine.
> So I would just configure everything up to the VLAN (slaves, bond, VLAN)
> with NetworkManager prior to adding it to Engine. The bridge should be
> created when you add the host.
> (assuming the VLAN you mentioned is your management interface and its IP
> is the one used by Engine)
>
>> My question:
>> Does NetworkManager need to be disabled, as in the above post?
>
> No (for 4.1 and 4.2).
>
>> Do I need to manage the networking using (nmtui) NetworkManager?
>
> You better use cockpit or nmcli to configure the node before you add it to
On Mon, Feb 5, 2018 at 2:19 PM, Alex K wrote: > Hi all, > > I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The > cluster is used to host several VMs. > I have observed that when gateway is lost (say the gateway device is down) > the ovirt cluster goes down. > > It seems a bit extreme behavior especially when one does not care if the > hosted VMs have connectivity to Internet or not. > > Can this behavior be disabled? > > Thanx, > Alex > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 6 08:17:19 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 6 Feb 2018 10:17:19 +0200 Subject: [ovirt-users] ovirt 4.2 vdsclient Message-ID: Hi all, I have a stuck snapshot removal from a VM which is blocking the VM to start. In ovirt 4.1 I was able to cancel the stuck task by running within SPM host: vdsClient -s 0 getAllTasksStatuses vdsClient -s 0 stopTask Is there a similar way to do at ovirt 4.2? Thanx, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 6 08:20:54 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 6 Feb 2018 10:20:54 +0200 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: Hi Edward, So this is not an expected behavior? I will collect logs as soon as I reproduce it. Thanx, Alex On Tue, Feb 6, 2018 at 9:36 AM, Edward Haas wrote: > Hi Alex, > > Please provide Engine logs from when this is occurring and mention the > date/time we should focus at. > > Thanks, > Edy. > > > On Mon, Feb 5, 2018 at 2:19 PM, Alex K wrote: > >> Hi all, >> >> I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The >> cluster is used to host several VMs. 
>> I have observed that when gateway is lost (say the gateway device is >> down) the ovirt cluster goes down. >> >> It seems a bit extreme behavior especially when one does not care if the >> hosted VMs have connectivity to Internet or not. >> >> Can this behavior be disabled? >> >> Thanx, >> Alex >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzlotnik at redhat.com Tue Feb 6 08:24:14 2018 From: bzlotnik at redhat.com (Benny Zlotnik) Date: Tue, 6 Feb 2018 10:24:14 +0200 Subject: [ovirt-users] ovirt 4.2 vdsclient In-Reply-To: References: Message-ID: It was replaced by vdsm-client[1] [1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/ On Tue, Feb 6, 2018 at 10:17 AM, Alex K wrote: > Hi all, > > I have a stuck snapshot removal from a VM which is blocking the VM to > start. > In ovirt 4.1 I was able to cancel the stuck task by running within SPM > host: > > vdsClient -s 0 getAllTasksStatuses > vdsClient -s 0 stopTask > > Is there a similar way to do at ovirt 4.2? > > Thanx, > Alex > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolas at devels.es Tue Feb 6 08:25:42 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Tue, 06 Feb 2018 08:25:42 +0000 Subject: [ovirt-users] Failed upgrade from 4.1.9 to 4.2.x In-Reply-To: References: <29f119b821e5192bc086d5f0ac1b3ffd@devels.es> <7a437bf424f3224e68d6fd81d9818aec@devels.es> Message-ID: El 2018-02-05 14:48, Martin Perina escribi?: > On Mon, Feb 5, 2018 at 3:08 PM, wrote: > >> El 2018-02-05 14:03, Simone Tiraboschi escribi?: >> On Mon, Feb 5, 2018 at 2:46 PM, wrote: >> >> Hi, >> >> We're trying to upgrade from 4.1.9 to 4.2.x and we're bumping into >> an error we don't know how to solve. As per [1] we run the >> 'engine-setup' command and it fails with: >> >> [ INFO? ] Rolling back to the previous PostgreSQL instance >> (postgresql). >> [ ERROR ] Failed to execute stage 'Misc configuration': Command >> '/opt/rh/rh-postgresql95/root/usr/bin/postgresql-setup' failed to >> execute >> [ INFO? ] Yum Performing yum transaction rollback >> [ INFO? ] Stage: Clean up >> ? ? ? ? ? Log file is located at >> >> > /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log >> [ INFO? ] Generating answer file >> '/var/lib/ovirt-engine/setup/answers/20180205133354-setup.co [1] >> [1]nf' >> [ INFO? ] Stage: Pre-termination >> [ INFO? ] Stage: Termination >> [ ERROR ] Execution of setup failed >> >> As of the >> >> > /var/log/ovirt-engine/setup/ovirt-engine-setup-20180205133116-sm2xd1.log >> file I could see this: >> >> ?* upgrading from 'postgresql.service' to >> 'rh-postgresql95-postgresql.se [2] [2]rvice' >> ?* Upgrading database. >> ERROR: pg_upgrade tool failed >> ERROR: Upgrade failed. >> ?* See /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log for >> details. >> >> And this file contains this information: >> >> ? Performing Consistency Checks >> ? ----------------------------- >> ? Checking cluster versions? ? ? ? ? ? ? ? ? ? ? ? ? >> ? ? ? ? ?ok >> ? Checking database user is the install user? ? ? ? ? ? ? >> ? ? ok >> ? 
Checking database connection settings? ? ? ? ? ? ? ? ? >> ? ? ?ok >> ? Checking for prepared transactions? ? ? ? ? ? ? ? ? ? >> ? ? ? ok >> ? Checking for reg* system OID user data types? ? ? ? ? ? ? >> ? ok >> ? Checking for contrib/isn with bigint-passing mismatch? ? ? >> ?ok >> ? Checking for invalid "line" user columns? ? ? ? ? ? ? ? >> ? ? ok >> ? Creating dump of global objects? ? ? ? ? ? ? ? ? ? ? >> ? ? ? ?ok >> ? Creating dump of database schemas >> ? ? django >> ? ? engine >> ? ? ovirt_engine_history >> ? ? postgres >> ? ? template1 >> ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? >> ? ? ? ? ? ? ? ? ok >> ? Checking for presence of required libraries? ? ? ? ? ? ? >> ? ?fatal >> >> ? Your installation references loadable libraries that are missing >> from the >> ? new installation.? You can add these libraries to the new >> installation, >> ? or remove the functions using them from the old installation.? >> A list of >> ? problem libraries is in the file: >> ? loadable_libraries.txt >> >> ? Failure, exiting >> >> I'm attaching full logs FWIW. Also, I'd like to mention that we >> created two custom triggers on the engine's 'users' table, but as I >> understand from the error this is not the issue (We upgraded >> several >> times within the same minor and we had no issues with that). >> >> Could someone shed some light on this error and how to debug it? >> >> Hi, >> can you please attach also loadable_libraries.txt ? >> ? > > Could not load library "$libdir/plpython2" > ERROR:? could not access file "$libdir/plpython2": No such file or > directory > > ?Hmm, you probably need to install > rh-postgresql95-postgresql-plpython package. This is not installed by > default with oVirt as we don't use it > ?? Indeed, this made it. Thank you very much. > >> Well, definitely it has to do with the triggers... The trigger uses >> plpython2u to replicate some entries in a different database. 
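As the thread resolves above, the `$libdir/plpython2` failure came from a custom trigger, and installing the matching plpython package for the new PostgreSQL collection clears it. A sketch of the recovery sequence — the package name is the one given in the thread; verify it matches your PostgreSQL 9.5 collection, and back up before retrying:

```shell
# Take a backup first, install the plpython language for the
# rh-postgresql95 collection, then re-run the upgrade.
engine-backup --mode=backup --file=engine-backup.tar.gz --log=engine-backup.log
yum install -y rh-postgresql95-postgresql-plpython
engine-setup
```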
Is >> there a way I can get rid of this error other than disabling >> plpython2 before upgrading and re-enabling it after the upgrade? >> >> Thanks. >> >> Thanks. >> >> ? [1]: https://www.ovirt.org/release/4.2.0/ [3] [3] >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [4] [4] >> >> Links: >> ------ >> [1] http://20180205133354-setup.co [1] >> [2] http://rh-postgresql95-postgresql.se [2] >> [3] https://www.ovirt.org/release/4.2.0/ [3] >> [4] http://lists.ovirt.org/mailman/listinfo/users [4] > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users [4] > > -- > > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > > > Links: > ------ > [1] http://20180205133354-setup.co > [2] http://rh-postgresql95-postgresql.se > [3] https://www.ovirt.org/release/4.2.0/ > [4] http://lists.ovirt.org/mailman/listinfo/users From rightkicktech at gmail.com Tue Feb 6 08:36:50 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 6 Feb 2018 10:36:50 +0200 Subject: [ovirt-users] ovirt 4.2 vdsclient In-Reply-To: References: Message-ID: Hi Benny, I was trying to do it with vdsm-client without success. vdsm-client Task -h usage: vdsm-client Task [-h] method [arg=value] ... optional arguments: -h, --help show this help message and exit Task methods: method [arg=value] getInfo Get information about a Task. getStatus Get Task status information. revert Rollback a Task to restore the previous system state. clear Discard information about a finished Task. stop Stop a currently running Task. 
[root at v0 common]# vdsm-client Task getInfo vdsm-client: Command Task.getInfo with args {} failed: (code=-32603, message=Internal JSON-RPC error: {'reason': '__init__() takes exactly 2 arguments (1 given)'}) [root at v0 common]# vdsm-client Task getStatus vdsm-client: Command Task.getStatus with args {} failed: (code=-32603, message=Internal JSON-RPC error: {'reason': '__init__() takes exactly 2 arguments (1 given)'}) What other arguments does this expect. When using Host namespace I am able to run the available options. Thanx, Alex On Tue, Feb 6, 2018 at 10:24 AM, Benny Zlotnik wrote: > It was replaced by vdsm-client[1] > > [1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/ > > On Tue, Feb 6, 2018 at 10:17 AM, Alex K wrote: > >> Hi all, >> >> I have a stuck snapshot removal from a VM which is blocking the VM to >> start. >> In ovirt 4.1 I was able to cancel the stuck task by running within SPM >> host: >> >> vdsClient -s 0 getAllTasksStatuses >> vdsClient -s 0 stopTask >> >> Is there a similar way to do at ovirt 4.2? >> >> Thanx, >> Alex >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Tue Feb 6 08:40:14 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 6 Feb 2018 10:40:14 +0200 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: On Feb 5, 2018 2:21 PM, "Alex K" wrote: Hi all, I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The cluster is used to host several VMs. I have observed that when gateway is lost (say the gateway device is down) the ovirt cluster goes down. Is the cluster down, or just the self-hosted engine? It seems a bit extreme behavior especially when one does not care if the hosted VMs have connectivity to Internet or not. Are the VMs down? The hosts? Y. 
Can this behavior be disabled? Thanx, Alex _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Tue Feb 6 09:05:29 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Tue, 6 Feb 2018 10:05:29 +0100 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: <20180205221330.GR2787@redhat.com> References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com> <20180205221330.GR2787@redhat.com> Message-ID: On Mon, Feb 5, 2018 at 11:13 PM, Richard W.M. Jones wrote: > http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources > > You should be able to run multiple conversions in parallel > to improve throughput. > > The only long-term solution is to use a different method such as VMX > over SSH. vCenter is just fundamentally bad. 4 conversions in parallel works, but each one is very slow. But i think i've to blame vcenter cpu which is stuck at 100%. Thank you for the directions and suggestions, Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? 
che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From nicolas at ecarnot.net Tue Feb 6 09:01:08 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Tue, 6 Feb 2018 10:01:08 +0100 Subject: [ovirt-users] qemu-kvm images corruption In-Reply-To: <09a59077-c7d4-920b-5426-e02c73aad999@ecarnot.net> References: <09a59077-c7d4-920b-5426-e02c73aad999@ecarnot.net> Message-ID: Hello, On our two 3.6 DCs, we're still facing qcow2 corruptions, even on freshly installed VMs (CentOS7, win2012, win2008...). (We are still hoping to find some time to migrate all this to 4.2, but it's a big work and our one-person team - me - is overwhelmed.) My workaround is described in my previous thread below, but it's just a workaround. Reading further, I found that : https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heavy-disk-i-o.32865/page-2 There are many things I don't know or understand, and I'd like your opinion : - Is "virtio" is synonym of "virtio-blk"? - Is it true that the development of virtio-scsi is active and the one of virtio is stopped? - People in the proxmox forum seem to say that no qcow2 corruption occurs when using IDE (not an option for me) neither virtio-scsi. Does any Redhat people ever heard of this? - Is converting all my VMs to use virtio-scsi a guarantee against further corruptions? - What is the non-official but nonetheless recommended driver oVirt devs recommend in the sense of future, development and stability? Regards, -- Nicolas ECARNOT Le 15/09/2017 ? 14:06, Nicolas Ecarnot a ?crit?: > TL;DR: > How to avoid images corruption? > > > Hello, > > On two of our old 3.6 DC, a recent series of VM migrations lead to some > issues : > - I'm putting a host into maintenance mode > - most of the VM are migrating nicely > - one remaining VM never migrates, and the logs are showing : > > * engine.log : "...VM has been paused due to I/O error..." 
> * vdsm.log : "...Improbable extension request for volume..." > > After digging amongst the RH BZ tickets, I saved the day by : > - stopping the VM > - lvchange -ay the adequate /dev/... > - qemu-img check [-r all] /rhev/blahblah > - lvchange -an... > - boot the VM > - enjoy! > > Yesterday this worked for a VM where only one error occurred on the qemu > image, and the repair was easily done by qemu-img. > > Today, facing the same issue on another VM, it failed because the errors > were very numerous, and also because of this message : > > [...] > Rebuilding refcount structure > ERROR writing refblock: No space left on device > qemu-img: Check failed: No space left on device > [...] > > The PV/VG/LV are far from being full, so I guess I don't where to look at. > I tried many ways to solve it but I'm not comfortable at all with qemu > images, corruption and solving, so I ended up exporting this VM (to an > NFS export domain), importing it into another DC : this had the side > effect to use qemu-img convert from qcow2 to qcow2, and (maybe?????) to > solve some errors??? > I also copied it into another qcow2 file with the same qemu-img convert > way, but it is leading to another clean qcow2 image without errors. > > I saw that on 4.x some bugs are fixed about VM migrations, but this is > not the point here. > I checked my SANs, my network layers, my blades, the OS (CentOS 7.2) of > my hosts, but I see nothing special. > > The real reason behind my message is not to know how to repair anything, > rather than to understand what could have lead to this situation? > Where to keep a keen eye? > -- Nicolas ECARNOT From emayoral at arsys.es Tue Feb 6 09:20:55 2018 From: emayoral at arsys.es (Eduardo Mayoral) Date: Tue, 6 Feb 2018 10:20:55 +0100 Subject: [ovirt-users] oVirt 4.2 , VM stuck in "Migrating from" state. Message-ID: Hi, ??? Got a problem with oVirt 4.2 While putting a Host in maintenance mode, an VM has failed to migrate. 
The end state is that the Web UI shows the VM as "Migrating from". The VM is not running in any Host in the cluster. This is the relevant message in the /var/log/ovirt-engine/engine.log 2018-02-06 09:09:05,379Z INFO? [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] VM 'ab158ff3-a716-4655-9269-11738cd53b05'(repositorionuget) is running in db and not running on VDS '82b49615-9c65-4d8e-80e0-f10089cb4225'(llkh456.arsyslan.es) 2018-02-06 09:09:05,381Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Failed during monitoring vm: ab158ff3-a716-4655-9269-11738cd53b05 , error is: {}: java.lang.NullPointerException ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.auditVmMigrationAbort(VmAnalyzer.java:440) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.abortVmMigration(VmAnalyzer.java:432) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.proceedDisappearedVm(VmAnalyzer.java:794) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.analyze(VmAnalyzer.java:135) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.lambda$analyzeVms$1(VmsMonitoring.java:136) [vdsbroker.jar:] ??????? at java.util.ArrayList.forEach(ArrayList.java:1255) [rt.jar:1.8.0_151] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.analyzeVms(VmsMonitoring.java:131) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.perform(VmsMonitoring.java:94) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:43) [vdsbroker.jar:] ??????? at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_151] ??????? 
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_151] ??????? at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:] ??????? at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:] ??????? at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_151] ??????? at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_151] ??????? at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151] ??????? at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:] ??????? at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) 2018-02-06 09:09:05,381Z ERROR [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Exception:: java.lang.NullPointerException ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.auditVmMigrationAbort(VmAnalyzer.java:440) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.abortVmMigration(VmAnalyzer.java:432) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.proceedDisappearedVm(VmAnalyzer.java:794) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.analyze(VmAnalyzer.java:135) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.lambda$analyzeVms$1(VmsMonitoring.java:136) [vdsbroker.jar:] ??????? at java.util.ArrayList.forEach(ArrayList.java:1255) [rt.jar:1.8.0_151] ??????? 
at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.analyzeVms(VmsMonitoring.java:131) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.perform(VmsMonitoring.java:94) [vdsbroker.jar:] ??????? at org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll(PollVmStatsRefresher.java:43) [vdsbroker.jar:] ??????? at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_151] ??????? at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) [rt.jar:1.8.0_151] ??????? at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201(ManagedScheduledThreadPoolExecutor.java:383) [javax.enterprise.concurrent-1.0.jar:] ??????? at org.glassfish.enterprise.concurrent.internal.ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run(ManagedScheduledThreadPoolExecutor.java:534) [javax.enterprise.concurrent-1.0.jar:] ??????? at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_151] ??????? at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_151] ??????? at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151] ??????? at org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ManagedThread.run(ManagedThreadFactoryImpl.java:250) [javax.enterprise.concurrent-1.0.jar:] ??????? at org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) I already tried canceling the migration, powering off the VM, restarting the engine service and restarting the vdsm on the host which is supposed to have that VM. No success so far. unlock_entity.sh shows no locked entities. Can somebody help on how to recover from this? Thanks! -- Eduardo Mayoral Jimeno (emayoral at arsys.es) Administrador de sistemas. Departamento de Plataformas. Arsys internet. +34 941 620 145 ext. 
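When a ghost "Migrating from" state survives engine restarts, VDSM restarts and unlock_entity.sh, the usual last resort is to correct the VM's row in the engine database with the engine stopped. This is only a sketch: the column names and the `status = 0` ("Down") value are assumptions about the 4.2 schema — verify them against your own database and take a backup first:

```shell
systemctl stop ovirt-engine
engine-backup --mode=backup --file=pre-fix.tar.gz --log=pre-fix.log

# Mark the VM as Down and detach it from any host. The vm_guid is the one
# from the engine.log above; status=0 meaning "Down" is an assumption to
# confirm in your schema before running this.
su - postgres -c "psql engine -c \"UPDATE vm_dynamic \
    SET status = 0, run_on_vds = NULL, migrating_to_vds = NULL \
    WHERE vm_guid = 'ab158ff3-a716-4655-9269-11738cd53b05';\""

systemctl start ovirt-engine
```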
5153 From lorenzetto.luca at gmail.com Tue Feb 6 09:32:38 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Tue, 6 Feb 2018 10:32:38 +0100 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: On Mon, Feb 5, 2018 at 7:20 PM, Maor Lipchuk wrote: > Hi Luca, > > Thank you for your interst in the Disaster Recovery ansible solution, it is > great to see users get familiar with it. > Please see my comments inline > > Regards, > Maor > > On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul wrote: >> >> >> >> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" >> wrote: >> >> Hello, >> >> i'm starting the implementation of our disaster recovery site with RHV >> 4.1.latest for our production environment. >> >> Our production setup is very easy, with self hosted engine on dc >> KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our >> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA >> and EMC VNX8000. Both storage arrays supports replication via their >> own replication protocols (SRDF, MirrorView), so we'd like to delegate >> to them the replication of data to the remote site, which is located >> on another remote datacenter. >> >> In KVMPD DC we have some storage domains that contains non critical >> VMs, which we don't want to replicate to remote site (in case of >> failure they have a low priority and will be restored from a backup). >> In our setup we won't replicate them, so will be not available for >> attachment on remote site. Can be this be an issue? Do we require to >> replicate everything? > > > No, it is not required to replicate everything. > If there are no disks on those storage domains that attached to your > critical VMs/Templates you don't have to use them as part of yout mapping > var file > Excellent. >> >> What about master domain? 
Do i require that the master storage domain >> stays on a replicated volume or can be any of the available ones? > > > > You can choose which storage domains you want to recover. > Basically, if a storage domain is indicated as "master" in the mapping var > file then it should be attached first to the Data Center. > If your secondary setup already contains a master storage domain which you > dont care to replicate and recover, then you can configure your mapping var > file to only attach regular storage domains, simply indicate > "dr_master_domain: False" in the dr_import_storages for all the storage > domains. (You can contact me on IRC if you need some guidance with it) > Good, that's my case. I don't need a new master domain on remote side, because is an already up and running setup where i want to attach replicated storage and run the critical VMs. >> >> >> I've seen that since 4.1 there's an API for updating OVF_STORE disks. >> Do we require to invoke it with a frequency that is the compatible >> with the replication frequency on storage side. > > > > No, you don't have to use the update OVF_STORE disk for replication. > The OVF_STORE disk is being updated every 60 minutes (The default > configuration value), > What i need is that informations about vms is replicated to the remote site with disk. In an older test i had the issue that disks were replicated to remote site, but vm configuration not! I've found disks in the "Disk" tab of storage domain, but nothing on VM Import. >> >> We set at the moment >> RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets >> updated with the required frequency? > > > > OVF_STORE disk is being updated every 60 minutes but keep in mind that the > OVF_STORE is being updated internally in the engine so it might not be > synced with the RPO which you configured. 
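For the case Maor describes — recover only the replicated regular domains and leave the secondary site's existing master untouched — a mapping var file fragment might look like the sketch below. The attribute names are assumptions modeled on the ovirt-ansible-disaster-recovery generated var file (regenerate with the tool's own script rather than writing it by hand); names, DCs and domain details are placeholders:

```shell
# Drop a dr_import_storages sketch into the playbook directory;
# dr_master_domain: False keeps the secondary site's master out of the failover.
cat > disaster_recovery_vars.yml <<'EOF'
dr_import_storages:
  - dr_domain_type: fcp
    dr_primary_name: prod_fcp_01
    dr_primary_dc_name: KVMPDCA
    dr_secondary_name: prod_fcp_01
    dr_secondary_dc_name: KVMPDCA_DR
    dr_master_domain: False
EOF
```

On the RPO point: the OVF refresh interval should be inspectable on the engine with `engine-config -g OvfUpdateIntervalInMinutes` (option name to verify on your version), which is how the ~1 hour figure quoted above is configured.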
> If I understood correctly, then you are right by indicating that the data of > the storage domain will be synced at approximatly 2 hours = RPO of 1hr + > OVF_STORE update of 1hr > We require that we can recover vms with a status that is up to 2 hours ago. In worst case, from what you say, i think we'll be able to. [cut] > > Indeed, > We also introduced several functionalities like detach of master storage > domain , and attach of "dirty" master storage domain which are depndant on > the failover process, so unfortunatly to support a full recovery process you > will need oVirt 4.2 env. > Ok, but if i keep master storage domain on a non replicate volume, do i require this function? I have to admit that i require, for subscription and support requirements, to use RHV over oVirt. I've seen 4.2 is coming also from that side, and we'll upgrade for sure when available. [cut] > > > Please feel free to share your comments and questions, I would very > appreciate to know your user expirience. Sure, i'll do! And i'll bother you on irc if i need some guidance :-) Thank you so much, Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From ykaul at redhat.com Tue Feb 6 09:35:28 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 6 Feb 2018 11:35:28 +0200 Subject: [ovirt-users] qemu-kvm images corruption In-Reply-To: References: <09a59077-c7d4-920b-5426-e02c73aad999@ecarnot.net> Message-ID: On Feb 6, 2018 11:09 AM, "Nicolas Ecarnot" wrote: Hello, On our two 3.6 DCs, we're still facing qcow2 corruptions, even on freshly installed VMs (CentOS7, win2012, win2008...). 
Please provide complete information on the issue. When, how often, which storage, etc. (We are still hoping to find some time to migrate all this to 4.2, but it's a big work and our one-person team - me - is overwhelmed.) Understood. Note that we have some scripts that can assist somewhat. My workaround is described in my previous thread below, but it's just a workaround. Reading further, I found that : https://forum.proxmox.com/threads/qcow2-corruption-after- snapshot-or-heavy-disk-i-o.32865/page-2 There are many things I don't know or understand, and I'd like your opinion : - Is "virtio" is synonym of "virtio-blk"? Yes. - Is it true that the development of virtio-scsi is active and the one of virtio is stopped? No. - People in the proxmox forum seem to say that no qcow2 corruption occurs when using IDE (not an option for me) neither virtio-scsi. Anecdotal evidence or properly reproduced? Have they filed an issue? Does any Redhat people ever heard of this? I'm not aware of an existing corruption issue. - Is converting all my VMs to use virtio-scsi a guarantee against further corruptions? No. - What is the non-official but nonetheless recommended driver oVirt devs recommend in the sense of future, development and stability? Depends. I like virtio-scsi for its features (DISCARD mainly), but in some workloads virtio-blk may be somewhat faster (supposedly lower overhead). Both interfaces are stable. We should focus on properly reporting the issue so the qemu folks can look at this. Y. Regards, -- Nicolas ECARNOT Le 15/09/2017 ? 14:06, Nicolas Ecarnot a ?crit : > TL;DR: > How to avoid images corruption? > > > Hello, > > On two of our old 3.6 DC, a recent series of VM migrations lead to some > issues : > - I'm putting a host into maintenance mode > - most of the VM are migrating nicely > - one remaining VM never migrates, and the logs are showing : > > * engine.log : "...VM has been paused due to I/O error..." 
> * vdsm.log : "...Improbable extension request for volume..." > > After digging amongst the RH BZ tickets, I saved the day by : > - stopping the VM > - lvchange -ay the adequate /dev/... > - qemu-img check [-r all] /rhev/blahblah > - lvchange -an... > - boot the VM > - enjoy! > > Yesterday this worked for a VM where only one error occurred on the qemu > image, and the repair was easily done by qemu-img. > > Today, facing the same issue on another VM, it failed because the errors > were very numerous, and also because of this message : > > [...] > Rebuilding refcount structure > ERROR writing refblock: No space left on device > qemu-img: Check failed: No space left on device > [...] > > The PV/VG/LV are far from being full, so I guess I don't where to look at. > I tried many ways to solve it but I'm not comfortable at all with qemu > images, corruption and solving, so I ended up exporting this VM (to an NFS > export domain), importing it into another DC : this had the side effect to > use qemu-img convert from qcow2 to qcow2, and (maybe?????) to solve some > errors??? > I also copied it into another qcow2 file with the same qemu-img convert > way, but it is leading to another clean qcow2 image without errors. > > I saw that on 4.x some bugs are fixed about VM migrations, but this is not > the point here. > I checked my SANs, my network layers, my blades, the OS (CentOS 7.2) of my > hosts, but I see nothing special. > > The real reason behind my message is not to know how to repair anything, > rather than to understand what could have lead to this situation? > Where to keep a keen eye? > > -- Nicolas ECARNOT _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
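The repair workflow quoted in the thread can be condensed into a sketch for a block-storage DC — the VM must be down, and the LV path is a placeholder for the logical volume backing the broken image:

```shell
# Activate the volume, check/repair the qcow2 image, deactivate again.
LV=/dev/vgname/imageuuid   # placeholder: the LV backing the corrupted image

lvchange -ay "$LV"
qemu-img check "$LV"            # inspect first, without modifying anything
qemu-img check -r all "$LV"     # then attempt the repair
lvchange -an "$LV"
# Start the VM again from the engine and watch engine.log/vdsm.log.
```

As the thread shows, this only works while the corruption is limited; once `qemu-img check -r` itself fails, exporting/converting the image (or restoring from backup) is the remaining option.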
URL: From ykaul at redhat.com Tue Feb 6 09:52:32 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 6 Feb 2018 11:52:32 +0200 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com> <20180205221330.GR2787@redhat.com> Message-ID: On Feb 6, 2018 11:06 AM, "Luca 'remix_tj' Lorenzetto" < lorenzetto.luca at gmail.com> wrote: On Mon, Feb 5, 2018 at 11:13 PM, Richard W.M. Jones wrote: > http://libguestfs.org/virt-v2v.1.html#vmware-vcenter-resources > > You should be able to run multiple conversions in parallel > to improve throughput. > > The only long-term solution is to use a different method such as VMX > over SSH. vCenter is just fundamentally bad. 4 conversions in parallel works, but each one is very slow. But i think i've to blame vcenter cpu which is stuck at 100%. I assume its network interfaces are also a bottleneck as well. Certainly if they are 1g. Y. Thank you for the directions and suggestions, Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet ? la pi? grande biblioteca del mondo. Ma il problema ? che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < lorenzetto.luca at gmail.com> _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
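Running several conversions in parallel, as Richard suggests, can be scripted along these lines — the vpx URI, guest names and output storage are placeholders, and the exact options should be checked against virt-v2v(1) on your version:

```shell
# Convert four guests concurrently from vCenter; each virt-v2v process opens
# its own connection, so throughput scales until vCenter's CPU saturates
# (which is what this thread observed).
VCENTER="vpx://administrator%40vsphere.local@vcenter.example.com/DC1/esxi1?no_verify=1"

for guest in guest1 guest2 guest3 guest4; do
    virt-v2v -ic "$VCENTER" "$guest" \
        -o rhv -os nfs.example.com:/export -of qcow2 &
done
wait
```

The longer-term fix mentioned above — the VMX-over-SSH input — bypasses vCenter entirely; its invocation differs per virt-v2v release, so consult the man page rather than this sketch.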
URL: From lorenzetto.luca at gmail.com Tue Feb 6 10:11:37 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Tue, 6 Feb 2018 11:11:37 +0100 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com> <20180205221330.GR2787@redhat.com> Message-ID: Il 6 feb 2018 10:52 AM, "Yaniv Kaul" ha scritto: I assume its network interfaces are also a bottleneck as well. Certainly if they are 1g. Y. That's not the case, vcenter uses 10g and also all the involved hosts. We first supposed the culprit was network, but investigations has cleared its position. Network usage is under 40% with 4 ongoing migrations. Luca -------------- next part -------------- An HTML attachment was scrubbed... URL: From igoihman at redhat.com Tue Feb 6 10:13:52 2018 From: igoihman at redhat.com (Irit Goihman) Date: Tue, 6 Feb 2018 12:13:52 +0200 Subject: [ovirt-users] ovirt 4.2 vdsclient In-Reply-To: References: Message-ID: Hi, The command is `vdsm-client Task getInfo taskID=` You can see available arguments in JSON format using `vdsm-client Task getInfo -h` command. On Tue, Feb 6, 2018 at 10:36 AM, Alex K wrote: > Hi Benny, > > I was trying to do it with vdsm-client without success. > > vdsm-client Task -h > usage: vdsm-client Task [-h] method [arg=value] ... > > optional arguments: > -h, --help show this help message and exit > > Task methods: > method [arg=value] > getInfo Get information about a Task. > getStatus Get Task status information. > revert Rollback a Task to restore the previous system state. > clear Discard information about a finished Task. > stop Stop a currently running Task. 
> [root at v0 common]# vdsm-client Task getInfo > vdsm-client: Command Task.getInfo with args {} failed: > (code=-32603, message=Internal JSON-RPC error: {'reason': '__init__() > takes exactly 2 arguments (1 given)'}) > [root at v0 common]# vdsm-client Task getStatus > vdsm-client: Command Task.getStatus with args {} failed: > (code=-32603, message=Internal JSON-RPC error: {'reason': '__init__() > takes exactly 2 arguments (1 given)'}) > > What other arguments does this expect. When using Host namespace I am able > to run the available options. > > Thanx, > Alex > > > On Tue, Feb 6, 2018 at 10:24 AM, Benny Zlotnik > wrote: > >> It was replaced by vdsm-client[1] >> >> [1] - https://www.ovirt.org/develop/developer-guide/vdsm/vdsm-client/ >> >> On Tue, Feb 6, 2018 at 10:17 AM, Alex K wrote: >> >>> Hi all, >>> >>> I have a stuck snapshot removal from a VM which is blocking the VM to >>> start. >>> In ovirt 4.1 I was able to cancel the stuck task by running within SPM >>> host: >>> >>> vdsClient -s 0 getAllTasksStatuses >>> vdsClient -s 0 stopTask >>> >>> Is there a similar way to do at ovirt 4.2? >>> >>> Thanx, >>> Alex >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- IRIT GOIHMAN SOFTWARE ENGINEER EMEA VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. @redhatnews Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rjones at redhat.com Tue Feb 6 10:19:29 2018 From: rjones at redhat.com (Richard W.M. 
Jones) Date: Tue, 6 Feb 2018 10:19:29 +0000 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com> <20180205221330.GR2787@redhat.com> Message-ID: <20180206101929.GV2787@redhat.com> On Tue, Feb 06, 2018 at 11:11:37AM +0100, Luca 'remix_tj' Lorenzetto wrote: > Il 6 feb 2018 10:52 AM, "Yaniv Kaul" ha scritto: > > > I assume its network interfaces are also a bottleneck as well. Certainly if > they are 1g. > Y. > > > That's not the case, vcenter uses 10g and also all the involved hosts. > > We first supposed the culprit was network, but investigations has cleared > its position. Network usage is under 40% with 4 ongoing migrations. The problem is two-fold and is common to all vCenter transformations: (1) A single https connection is used and each block of data that is requested is processed serially. (2) vCenter has to forward each request to the ESXi hypervisor. (1) + (2) => most time is spent waiting on the lengthy round trips for each requested block of data. This is why overlapping multiple parallel conversions works and (although each conversion is just as slow) improves throughput, because you're filling in the long idle gaps by serving other conversions. This is also why other methods perform so much better. VMX over SSH uses a single connection but connects directly to the ESXi hypervisor, so cause (2) is eliminated. VMX over NFS eliminates VMware servers entirely and can make multiple parallel requests, eliminating (1) and (2). VDDK [in ideal circumstances] can mount the FC storage directly on the conversion host meaning the ordinary network is not even used and all requests travel over the SAN. Rich. 
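[Editorial note: Rich's round-trip explanation is easy to reproduce with a toy model that has nothing to do with virt-v2v itself — give every block request a fixed latency cost and compare one serial stream against overlapped streams. The numbers below are made up; only the shape of the result matters.]

```python
import time
from concurrent.futures import ThreadPoolExecutor

RTT = 0.01     # pretend round trip per block request (seconds)
BLOCKS = 20    # pretend blocks per disk image
VMS = ["vm1", "vm2", "vm3", "vm4"]

def convert_one(name):
    # One connection, blocks requested serially: the conversion costs
    # roughly BLOCKS * RTT no matter how much bandwidth is available.
    for _ in range(BLOCKS):
        time.sleep(RTT)
    return name

start = time.time()
for vm in VMS:
    convert_one(vm)
serial = time.time() - start

start = time.time()
with ThreadPoolExecutor(max_workers=len(VMS)) as pool:
    list(pool.map(convert_one, VMS))
overlapped = time.time() - start

# Each conversion is still just as slow, but the idle gaps of one
# stream are filled by the others, so aggregate throughput improves.
print("serial: %.2fs, overlapped: %.2fs" % (serial, overlapped))
```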
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-df lists disk usage of guests without needing to install any software inside the virtual machine. Supports Linux and Windows. http://people.redhat.com/~rjones/virt-df/ From ahadas at redhat.com Tue Feb 6 11:25:28 2018 From: ahadas at redhat.com (Arik Hadas) Date: Tue, 6 Feb 2018 13:25:28 +0200 Subject: [ovirt-users] oVirt 4.2 , VM stuck in "Migrating from" state. In-Reply-To: References: Message-ID: Hi, The problem you had is fixed already by https://gerrit.ovirt.org/#/c/86367/. I'm afraid you'll need to manually set the VM to Down in the database: update vm_dynamic set status=0 where vm_guid in (select vm_guid from vm_static where vm_name='') On Tue, Feb 6, 2018 at 11:20 AM, Eduardo Mayoral wrote: > Hi, > > Got a problem with oVirt 4.2 > > While putting a Host in maintenance mode, an VM has failed to migrate. > The end state is that the Web UI shows the VM as "Migrating from". > > The VM is not running in any Host in the cluster. > > This is the relevant message in the /var/log/ovirt-engine/engine.log > > 2018-02-06 09:09:05,379Z INFO > [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] > (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] VM > 'ab158ff3-a716-4655-9269-11738cd53b05'(repositorionuget) is running in > db and not running on VDS > '82b49615-9c65-4d8e-80e0-f10089cb4225'(llkh456.arsyslan.es) > 2018-02-06 09:09:05,381Z ERROR > [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] > (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Failed during > monitoring vm: ab158ff3-a716-4655-9269-11738cd53b05 , error is: {}: > java.lang.NullPointerException > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer. 
> auditVmMigrationAbort(VmAnalyzer.java:440) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.abortVmMigration( > VmAnalyzer.java:432) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer. > proceedDisappearedVm(VmAnalyzer.java:794) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.analyze(VmAnalyzer. > java:135) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.lambda$ > analyzeVms$1(VmsMonitoring.java:136) > [vdsbroker.jar:] > at java.util.ArrayList.forEach(ArrayList.java:1255) > [rt.jar:1.8.0_151] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.analyzeVms( > VmsMonitoring.java:131) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.perform( > VmsMonitoring.java:94) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll( > PollVmStatsRefresher.java:43) > [vdsbroker.jar:] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_151] > at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [rt.jar:1.8.0_151] > at > org.glassfish.enterprise.concurrent.internal. > ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201( > ManagedScheduledThreadPoolExecutor.java:383) > [javax.enterprise.concurrent-1.0.jar:] > at > org.glassfish.enterprise.concurrent.internal. 
> ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run( > ManagedScheduledThreadPoolExecutor.java:534) > [javax.enterprise.concurrent-1.0.jar:] > at > java.util.concurrent.ThreadPoolExecutor.runWorker( > ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_151] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run( > ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_151] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151] > at > org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ > ManagedThread.run(ManagedThreadFactoryImpl.java:250) > [javax.enterprise.concurrent-1.0.jar:] > at > org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ > ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) > > 2018-02-06 09:09:05,381Z ERROR > [org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring] > (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Exception:: > java.lang.NullPointerException > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer. > auditVmMigrationAbort(VmAnalyzer.java:440) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.abortVmMigration( > VmAnalyzer.java:432) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer. > proceedDisappearedVm(VmAnalyzer.java:794) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer.analyze(VmAnalyzer. 
> java:135) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.lambda$ > analyzeVms$1(VmsMonitoring.java:136) > [vdsbroker.jar:] > at java.util.ArrayList.forEach(ArrayList.java:1255) > [rt.jar:1.8.0_151] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.analyzeVms( > VmsMonitoring.java:131) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.VmsMonitoring.perform( > VmsMonitoring.java:94) > [vdsbroker.jar:] > at > org.ovirt.engine.core.vdsbroker.monitoring.PollVmStatsRefresher.poll( > PollVmStatsRefresher.java:43) > [vdsbroker.jar:] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_151] > at > java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > [rt.jar:1.8.0_151] > at > org.glassfish.enterprise.concurrent.internal. > ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.access$201( > ManagedScheduledThreadPoolExecutor.java:383) > [javax.enterprise.concurrent-1.0.jar:] > at > org.glassfish.enterprise.concurrent.internal. > ManagedScheduledThreadPoolExecutor$ManagedScheduledFutureTask.run( > ManagedScheduledThreadPoolExecutor.java:534) > [javax.enterprise.concurrent-1.0.jar:] > at > java.util.concurrent.ThreadPoolExecutor.runWorker( > ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_151] > at > java.util.concurrent.ThreadPoolExecutor$Worker.run( > ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_151] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_151] > at > org.glassfish.enterprise.concurrent.ManagedThreadFactoryImpl$ > ManagedThread.run(ManagedThreadFactoryImpl.java:250) > [javax.enterprise.concurrent-1.0.jar:] > at > org.jboss.as.ee.concurrent.service.ElytronManagedThreadFactory$ > ElytronManagedThread.run(ElytronManagedThreadFactory.java:78) > > I already tried canceling the migration, powering off the VM, restarting > the engine service and restarting the vdsm on the host which is supposed > to have that VM. No success so far. 
unlock_entity.sh shows no locked > entities. > > Can somebody help on how to recover from this? > > Thanks! > > > -- > Eduardo Mayoral Jimeno (emayoral at arsys.es) > Administrador de sistemas. Departamento de Plataformas. Arsys internet. > +34 941 620 145 ext. 5153 > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emayoral at arsys.es Tue Feb 6 11:35:40 2018 From: emayoral at arsys.es (Eduardo Mayoral) Date: Tue, 6 Feb 2018 12:35:40 +0100 Subject: [ovirt-users] oVirt 4.2 , VM stuck in "Migrating from" state. In-Reply-To: References: Message-ID: <7a4ec16c-7f51-9e6f-e1ca-532b6153b792@arsys.es> Worked like a charm. Double thanks, for helping and for helping so fast! Best regards, Eduardo Mayoral Jimeno (emayoral at arsys.es) Administrador de sistemas. Departamento de Plataformas. Arsys internet. +34 941 620 145 ext. 5153 On 06/02/18 12:25, Arik Hadas wrote: > Hi, > > The problem you had is fixed already by > https://gerrit.ovirt.org/#/c/86367/. > I'm afraid you'll need to manually set the VM to Down in the database: > update vm_dynamic set status=0 where vm_guid in (select vm_guid from > vm_static where vm_name='') > > On Tue, Feb 6, 2018 at 11:20 AM, Eduardo Mayoral wrote: > > Hi, > > Got a problem with oVirt 4.2 > > While putting a Host in maintenance mode, a VM has failed to migrate. > The end state is that the Web UI shows the VM as "Migrating from". > > The VM is not running in any Host in the cluster. > > This is the relevant message in the /var/log/ovirt-engine/engine.log > > [...] > > I already tried canceling the migration, powering off the VM, > restarting the engine service and restarting the vdsm on the host which is > supposed to have that VM. No success so far. unlock_entity.sh shows no locked > entities. > > Can somebody help on how to recover from this? > > Thanks!
> > > -- > Eduardo Mayoral Jimeno (emayoral at arsys.es ) > Administrador de sistemas. Departamento de Plataformas. Arsys > internet. > +34 941 620 145 ext. 5153 > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mpolednik at redhat.com Tue Feb 6 11:38:32 2018 From: mpolednik at redhat.com (Martin Polednik) Date: Tue, 6 Feb 2018 12:38:32 +0100 Subject: [ovirt-users] Documentation about vGPU in oVirt 4.2 In-Reply-To: References: <83cb8bad-24ed-b982-03e9-fe4c8a33ebd9@it-novum.com> Message-ID: <20180206113831.GA2376@dhcp130-229.brq.redhat.com> On 05/02/18 14:38 +0100, Gianluca Cecchi wrote: >On Fri, Feb 2, 2018 at 12:13 PM, Jordan, Marcel >wrote: > >> Hi, >> >> i have some NVIDIA Tesla P100 and V100 gpu in our oVirt 4.2 cluster and >> searching for a documentation how to use the new vGPU feature. Is there >> any documentation out there how i configure it correctly? >> >> -- >> Marcel Jordan >> >> >> >Possibly check what would become the official documentation for RHEV 4.2, >even if it could not map one-to-one with oVirt > >Admin guide here: >https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-beta/html/administration_guide/sect-host_tasks#Preparing_GPU_Passthrough > >Planning and prerequisites guide here: >https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-Beta/html/planning_and_prerequisites_guide/requirements#pci_device_requirements > >In oVirt 4.2 release notes I see these bugzilla entries that can help too... 
>https://bugzilla.redhat.com/show_bug.cgi?id=1481007 >https://bugzilla.redhat.com/show_bug.cgi?id=1482033 There are also blogposts about vGPU in 4.1.4/4.2 that you might find useful: https://mpolednik.github.io/2017/09/13/vgpu-in-ovirt/ https://mpolednik.github.io/2017/05/21/vfio-mdev/ mpolednik >HIH, >Gianluca >_______________________________________________ >Users mailing list >Users at ovirt.org >http://lists.ovirt.org/mailman/listinfo/users From nicolas at devels.es Tue Feb 6 11:45:58 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Tue, 06 Feb 2018 11:45:58 +0000 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal Message-ID: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> Hi, We recently upgraded to oVirt 4.2.0 and we're testing things so we can determine if our production system might also be upgraded or not. We do an extensive use of the User Portal, I've granted the VmCreator and DiskProfileUser privileges on a user (the user has a quota as well), I logged in to the user portal, I can successfully create a VM setting its memory and CPUs but: 1) I can't see a way to change the console type. By default, when the machine is created, SPICE is chosen as the mechanism, and I'd like to change it to VNC, but I can't find a way. 2) I can't see a way to add a disk to the VM. I'm attaching a screenshot of what I see in the panel. Are some new privileges needed to add a disk or change the console type? Thanks -------------- next part -------------- A non-text attachment was scrubbed... Name: Captura de pantalla de 2018-02-06 11-43-35.png Type: image/png Size: 12012 bytes Desc: not available URL: From andreil1 at starlett.lv Tue Feb 6 12:01:10 2018 From: andreil1 at starlett.lv (Andrei V) Date: Tue, 6 Feb 2018 14:01:10 +0200 Subject: [ovirt-users] Problem - Ubuntu 16.04.3 Guest Weekly Freezes Message-ID: <5AE49D82-CE0D-495E-86A3-64B7AC9A43D4@starlett.lv> Hi ! 
I have a strange and annoying problem with one VM on oVirt node 4.2 - weekly freezes of Ubuntu 16.04.3 with ISPConfig 3.1 active. ISPConfig is a Web GUI frontend (written in PHP) to Apache, Postfix, Dovecot, Amavis, Clam and ProFTPd. Separate engine PC, not hosted engine. Ubuntu 16.04.3 LTS (Xenial Xerus), 2 cores allocated, 8 GB RAM (only a fraction is being used). kernel 4.13.0-32-generic 6300ESB Watchdog Timer Memory ballooning disabled, and there are always about 7 GB of free RAM left. 4 VMs active, CPU load on node is low. Tried several kernel versions, no change. I can't trace any problem in the log on the Ubuntu guest. Even the watchdog timer 6300ESB configured to reset does nothing (which is really strange). VM stops responding even to pings, VM screen is also frozen. oVirt engine doesn't display the IP address anymore, which means ovirt-guest-agent is dead. VM is in DMZ, and not connected to ovirtmgmt, but rather to a bridged Ethernet interface. In oVirt I have defined network "DMZ Node10-NIC2". On node: cd /etc/sysconfig/network-scripts/ tail ifcfg-enp3s4f1 DEVICE=enp3s4f1 BRIDGE=ond04ad91e59c14 ONBOOT=yes MTU=1500 DEFROUTE=no NM_CONTROLLED=no IPV6INIT=no Googling doesn't show anything useful except an attempt to change the kernel version, which I already did. 1) Any idea how to fix this freeze? 2) While the problem is not fixed, I can create a cron script to handle the stubborn VM on the oVirt engine PC. Q: How to force power off, and then launch (after a timeout, e.g. 20 sec) this VM from a bash or Python script? Thanks in advance for any help. Andrei From nicolas at devels.es Tue Feb 6 12:13:41 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Tue, 06 Feb 2018 12:13:41 +0000 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal In-Reply-To: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> References: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> Message-ID: I can't even see other options, like adding NICs, or changing the machine type (server, desktop)... 
Was this removed on purpose or is there some permission(s) to grant? El 2018-02-06 11:45, nicolas at devels.es escribió: > Hi, > > We recently upgraded to oVirt 4.2.0 and we're testing things so we can > determine if our production system might also be upgraded or not. We > do an extensive use of the User Portal, I've granted the VmCreator and > DiskProfileUser privileges on a user (the user has a quota as well), I > logged in to the user portal, I can successfully create a VM setting > its memory and CPUs but: > > 1) I can't see a way to change the console type. By default, when the > machine is created, SPICE is chosen as the mechanism, and I'd like to > change it to VNC, but I can't find a way. > 2) I can't see a way to add a disk to the VM. > > I'm attaching a screenshot of what I see in the panel. > > Are some new privileges needed to add a disk or change the console > type? > > Thanks > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From budic at onholyground.com Tue Feb 6 15:06:48 2018 From: budic at onholyground.com (Darrell Budic) Date: Tue, 6 Feb 2018 09:06:48 -0600 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: <3824ED59-C36B-4925-AD3B-C96CDD5E08BC@onholyground.com> I've seen this sort of thing happen on my systems: the gateway IP goes down for some reason, and the engine restarts repeatedly, rendering it unusable, even though it's on the same IP subnet as all the host boxes and can still talk to the VDSMs. In my case, it doesn't hurt the cluster or DC, but it's annoying and unnecessary in my environment where the gateway isn't important for cluster communications. I can understand why using the IP of the gateway became a test as a proxy for network connectivity, but it seems like it's something that isn't always valid and maybe the local admin should have a choice of how it's used. 
Something like the current fencing option for "50% hosts down" as a double check: if you can still reach the vdsm hosts, don't restart the engine VM. -Darrell > From: Yaniv Kaul > Subject: Re: [ovirt-users] ovirt and gateway behavior > Date: February 6, 2018 at 2:40:14 AM CST > To: Alex > Cc: Ovirt Users > > > > On Feb 5, 2018 2:21 PM, "Alex K" > wrote: > Hi all, > > I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The cluster is used to host several VMs. > I have observed that when gateway is lost (say the gateway device is down) the ovirt cluster goes down. > > Is the cluster down, or just the self-hosted engine? > > > It seems a bit extreme behavior especially when one does not care if the hosted VMs have connectivity to Internet or not. > > Are the VMs down? > The hosts? > Y. > > > Can this behavior be disabled? > > Thanx, > Alex > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 6 15:27:40 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 6 Feb 2018 17:27:40 +0200 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: Hi, I have seen hosts rendered unresponsive when gateway is lost. I will be able to provide more info once I prepare an environment and test this further. Thanx, Alex On Tue, Feb 6, 2018 at 10:40 AM, Yaniv Kaul wrote: > > > On Feb 5, 2018 2:21 PM, "Alex K" wrote: > > Hi all, > > I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The > cluster is used to host several VMs. > I have observed that when gateway is lost (say the gateway device is down) > the ovirt cluster goes down. 
> > > Is the cluster down, or just the self-hosted engine? > > > It seems a bit extreme behavior especially when one does not care if the > hosted VMs have connectivity to Internet or not. > > > Are the VMs down? > The hosts? > Y. > > > Can this behavior be disabled? > > Thanx, > Alex > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdeluca at gmail.com Tue Feb 6 15:32:00 2018 From: bdeluca at gmail.com (Ben De Luca) Date: Tue, 06 Feb 2018 15:32:00 +0000 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: This is expected behaviour, even if it's not very bright. It's being used as a way to detect that the network is operating correctly. I got this trying to install on a network without a gateway. It is insane as there are so many ways it breaks. My network admin turns off ICMP responses and death to network. On Tue 6. Feb 2018 at 16:27, Alex K wrote: > Hi, > > I have seen hosts rendered unresponsive when gateway is lost. > I will be able to provide more info once I prepare an environment and test > this further. > > Thanx, > Alex > > On Tue, Feb 6, 2018 at 10:40 AM, Yaniv Kaul wrote: >> >> >> >> On Feb 5, 2018 2:21 PM, "Alex K" wrote: >> >> Hi all, >> >> I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The >> cluster is used to host several VMs. >> I have observed that when gateway is lost (say the gateway device is >> down) the ovirt cluster goes down. >> >> >> Is the cluster down, or just the self-hosted engine? >> >> >> It seems a bit extreme behavior especially when one does not care if the >> hosted VMs have connectivity to Internet or not. >> >> >> Are the VMs down? >> The hosts? >> Y. >> >> >> Can this behavior be disabled? 
>> Thanx, >> Alex >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Tue Feb 6 15:39:05 2018 From: msivak at redhat.com (Martin Sivak) Date: Tue, 6 Feb 2018 16:39:05 +0100 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: Hi, We use the ping check to see whether the host running the hosted engine has connectivity with the rest of the cluster and users. We kill the VM in the hope that some other host will make the engine available to users again. We use the gateway by default as it is pretty common to have a separate network for the data center, but you can change the address if your topology is different. Best regards Martin Sivak On Tue, Feb 6, 2018 at 4:27 PM, Alex K wrote: > Hi, > > I have seen hosts rendered unresponsive when gateway is lost. > I will be able to provide more info once I prepare an environment and test > this further. > > Thanx, > Alex > > On Tue, Feb 6, 2018 at 10:40 AM, Yaniv Kaul wrote: >> >> >> >> On Feb 5, 2018 2:21 PM, "Alex K" wrote: >> >> Hi all, >> >> I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The >> cluster is used to host several VMs. >> I have observed that when gateway is lost (say the gateway device is down) >> the ovirt cluster goes down. >> >> >> Is the cluster down, or just the self-hosted engine? >> >> >> It seems a bit extreme behavior especially when one does not care if the >> hosted VMs have connectivity to Internet or not. >> >> >> Are the VMs down? >> The hosts? >> Y. >> >> >> Can this behavior be disabled? 
>> >> Thanx, >> Alex >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From msivak at redhat.com Tue Feb 6 15:45:31 2018 From: msivak at redhat.com (Martin Sivak) Date: Tue, 6 Feb 2018 16:45:31 +0100 Subject: [ovirt-users] ovirt and gateway behavior In-Reply-To: References: Message-ID: > This is expected behaviour, even if it's not very bright. It's being used as > a way to detect network is operating correctly. Correct, it is used to check whether users can reach the host and the VM that runs on it. There aren't that many options to check that. All require data exchange of some kind (ICMP req/res, TCP SYN/ACK, some UDP echo..). > It is insane as there are so many ways it breaks. My network admin turns > off ICMP responses and death to network. ICMP is an important signaling mechanism.. seriously, it is usually a bad idea to block it. > I got this trying to install on a network without a gateway. How were your users accessing the VMs? Was this some kind of super secure deployment with no outside connectivity? Best regards Martin Sivak On Tue, Feb 6, 2018 at 4:32 PM, Ben De Luca wrote: > This is expected behaviour, even if it's not very bright. It's being used as > a way to detect network is operating correctly. > > I got this trying to install on a network without a gateway. > > It is insane as there are so many ways it breaks. My network admin turns > off ICMP responses and death to network. > > On Tue 6. Feb 2018 at 16:27, Alex K wrote: >> >> Hi, >> >> I have seen hosts rendered unresponsive when gateway is lost. >> I will be able to provide more info once I prepare an environment and test >> this further.
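[Editor's note] Martin's point above — that any liveness check needs some data exchange (ICMP req/res, TCP SYN/ACK, UDP echo) — can be illustrated with a TCP-based probe, which still works on networks where ICMP echo is filtered. This is only a generic sketch, not oVirt's actual implementation (the hosted-engine agent pings the configured gateway address):

```python
import socket

def tcp_reachable(host, port, timeout=2.0):
    """Return True if a TCP handshake to (host, port) completes.

    Unlike an ICMP ping, this works where ICMP is blocked,
    but it needs some service listening on the target port.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In a monitoring loop one would probe a well-known reachable service instead of the default gateway when ICMP responses are disabled; the target host and port here are whatever your own topology provides.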
>> >> Thanx, >> Alex >> >> On Tue, Feb 6, 2018 at 10:40 AM, Yaniv Kaul wrote: >>> >>> >>> >>> On Feb 5, 2018 2:21 PM, "Alex K" wrote: >>> >>> Hi all, >>> >>> I have a 3 nodes ovirt 4.1 cluster, self hosted on top of glusterfs. The >>> cluster is used to host several VMs. >>> I have observed that when gateway is lost (say the gateway device is >>> down) the ovirt cluster goes down. >>> >>> >>> Is the cluster down, or just the self-hosted engine? >>> >>> >>> It seems a bit extreme behavior especially when one does not care if the >>> hosted VMs have connectivity to Internet or not. >>> >>> >>> Are the VMs down? >>> The hosts? >>> Y. >>> >>> >>> Can this behavior be disabled? >>> >>> Thanx, >>> Alex >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From bonaros.konstantinos at gmail.com Tue Feb 6 16:09:40 2018 From: bonaros.konstantinos at gmail.com (Konstantinos Bonaros) Date: Tue, 6 Feb 2018 18:09:40 +0200 Subject: [ovirt-users] test Message-ID: test, please ignore -------------- next part -------------- An HTML attachment was scrubbed... URL: From cma at cmadams.net Tue Feb 6 16:56:13 2018 From: cma at cmadams.net (Chris Adams) Date: Tue, 6 Feb 2018 10:56:13 -0600 Subject: [ovirt-users] Memory leaks in ovirt-ha-agent, vdsmd Message-ID: <20180206165613.GA23244@cmadams.net> I regularly see memory leaks in ovirt-ha-agent and vdsmd. For example, I have a two-node 4.2.0 test setup with a hosted engine on iSCSI. Right now, vdsmd on one node is using 7.8G RAM, and ovirt-ha-agent is using 1.1G on each node. 
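[Editor's note] Per-process memory figures like the ones Chris quotes can be collected mechanically. The sketch below is generic Linux /proc parsing, not an oVirt tool; it shows the kind of RSS check one might script before deciding whether vdsmd or ovirt-ha-agent needs a restart:

```python
import os

def rss_kib(pid):
    """Return the resident set size of a process in KiB, read from /proc."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])  # VmRSS is reported in kB
    return 0

# Example: warn when a process crosses a threshold (here 4 GiB).
THRESHOLD_KIB = 4 * 1024 * 1024
if rss_kib(os.getpid()) > THRESHOLD_KIB:
    print("consider restarting the service")
```

A real watchdog would look up the PIDs of vdsmd/ovirt-ha-agent (for example via `systemctl show -p MainPID <unit>`) instead of its own PID.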
I've had this kind of problem with 4.1 production systems as well; it just seems to be a recurring issue. I have to periodically go through and restart these services on the nodes. Occasionally I see sanlock use up a bunch of RAM as well. -- Chris Adams From rgolan at redhat.com Tue Feb 6 19:51:06 2018 From: rgolan at redhat.com (Roy Golan) Date: Tue, 06 Feb 2018 19:51:06 +0000 Subject: [ovirt-users] Memory leaks in ovirt-ha-agent, vdsmd In-Reply-To: <20180206165613.GA23244@cmadams.net> References: <20180206165613.GA23244@cmadams.net> Message-ID: On Tue, 6 Feb 2018 at 19:12 Chris Adams wrote: > I regularly see memory leaks in ovirt-ha-agent and vdsmd. For example, > I have a two-node 4.2.0 test setup with a hosted engine on iSCSI. Right > now, vdsmd on one node is using 7.8G RAM, and ovirt-ha-agent is using > 1.1G on each node. > > I've had this kind of problem with 4.1 production systems as well; it > just seems to be a recurring issue. I have to periodically go through > and restart these services on the nodes. Occasionally I see sanlock use > up a bunch of RAM as well. > > Can you please report it on bugzilla? > -- > Chris Adams > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Tue Feb 6 20:33:57 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Tue, 6 Feb 2018 22:33:57 +0200 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 11:32 AM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Mon, Feb 5, 2018 at 7:20 PM, Maor Lipchuk wrote: > > Hi Luca, > > > > Thank you for your interest in the Disaster Recovery ansible solution, it > is > > great to see users get familiar with it.
> > Please see my comments inline > > > > Regards, > > Maor > > > > On Mon, Feb 5, 2018 at 7:54 PM, Yaniv Kaul wrote: > >> > >> > >> > >> On Feb 5, 2018 5:00 PM, "Luca 'remix_tj' Lorenzetto" > >> wrote: > >> > >> Hello, > >> > >> i'm starting the implementation of our disaster recovery site with RHV > >> 4.1.latest for our production environment. > >> > >> Our production setup is very easy, with self hosted engine on dc > >> KVMPDCA, and virtual machines both in KVMPDCA and KVMPD dcs. All our > >> setup has an FC storage backend, which is EMC VPLEX/VMAX in KVMPDCA > >> and EMC VNX8000. Both storage arrays support replication via their > >> own replication protocols (SRDF, MirrorView), so we'd like to delegate > >> to them the replication of data to the remote site, which is located > >> on another remote datacenter. > >> > >> In KVMPD DC we have some storage domains that contain non critical > >> VMs, which we don't want to replicate to remote site (in case of > >> failure they have a low priority and will be restored from a backup). > >> In our setup we won't replicate them, so they will be not available for > >> attachment on remote site. Can this be an issue? Do we require to > >> replicate everything? > > > > > > No, it is not required to replicate everything. > > If there are no disks on those storage domains that are attached to your > > critical VMs/Templates you don't have to use them as part of your mapping > > var file > > > > Excellent. > > >> > >> What about master domain? Do i require that the master storage domain > >> stays on a replicated volume or can be any of the available ones? > > > > > > > > You can choose which storage domains you want to recover. > > Basically, if a storage domain is indicated as "master" in the mapping > var > > file then it should be attached first to the Data Center.
> > If your secondary setup already contains a master storage domain which > you > > don't care to replicate and recover, then you can configure your mapping > var > > file to only attach regular storage domains, simply indicate > > "dr_master_domain: False" in the dr_import_storages for all the storage > > domains. (You can contact me on IRC if you need some guidance with it) > > > > Good, > > that's my case. I don't need a new master domain on remote side, > because it is an already up and running setup where i want to attach > replicated storage and run the critical VMs. > > > > >> > >> > >> I've seen that since 4.1 there's an API for updating OVF_STORE disks. > >> Do we require to invoke it with a frequency that is compatible > >> with the replication frequency on storage side. > > > > > > > > No, you don't have to use the update OVF_STORE disk for replication. > > The OVF_STORE disk is being updated every 60 minutes (The default > > configuration value), > > > > What i need is that information about vms is replicated to the remote > site with disk. > In an older test i had the issue that disks were replicated to remote > site, but vm configuration not! > I've found disks in the "Disk" tab of storage domain, but nothing on VM > Import. > Can you reproduce it and attach the logs of the setup before the disaster and after the recovery? That could happen in case of newly created VMs and Templates which were not yet updated in the OVF_STORE disk, since the OVF_STORE update process was not running yet before the disaster. Since the time of a disaster can't be anticipated, gaps like this might happen. > > >> > >> We set at the moment > >> RPO to 1hr (even if planned RPO requires 2hrs). Does OVF_STORE gets > >> updated with the required frequency? > > > > > > > > OVF_STORE disk is being updated every 60 minutes but keep in mind that > the > > OVF_STORE is being updated internally in the engine so it might not be > > synced with the RPO which you configured.
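[Editor's note] A fragment of the mapping var file Maor describes earlier in this thread might look roughly like the sketch below. Only `dr_import_storages` and `dr_master_domain` are taken from the thread itself; the other key names and all values are illustrative placeholders, so check them against the actual ovirt-ansible-disaster-recovery role documentation before use:

```yaml
# Hypothetical mapping var file fragment -- key names other than
# dr_import_storages / dr_master_domain are illustrative only.
dr_import_storages:
  - dr_domain_type: fcp
    dr_primary_name: replicated_domain
    # Regular (non-master) domain: attach it without the master role,
    # since the secondary site already has its own master domain.
    dr_master_domain: False
```

Non-replicated domains are simply left out of the list, matching the advice above that not everything has to be replicated.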
> > If I understood correctly, then you are right by indicating that the > data of > > the storage domain will be synced at approximately 2 hours = RPO of 1hr + > > OVF_STORE update of 1hr > > > > We require that we can recover vms with a status that is up to 2 hours > ago. In worst case, from what you say, i think we'll be able to. > > [cut] > > > > Indeed, > > We also introduced several functionalities like detach of master storage > > domain, and attach of "dirty" master storage domain which are dependent > on > > the failover process, so unfortunately to support a full recovery process > you > > will need oVirt 4.2 env. > > > > Ok, but if i keep master storage domain on a non-replicated volume, do > i require this function? > Basically it should also fail on VM/Template registration in oVirt 4.1 since there are also other functionalities like mapping of OVF attributes which was added on VM/Templates registration. > > I have to admit that i require, for subscription and support > requirements, to use RHV over oVirt. I've seen 4.2 is coming also from > that side, and we'll upgrade for sure when available. > > > [cut] > > > > > > Please feel free to share your comments and questions, I would very > much appreciate knowing about your user experience. > > Sure, i'll do! And i'll bother you on irc if i need some guidance :-) > > Thank you so much, > > Luca > > > -- > "It is absurd to employ men of excellent intelligence to perform > calculations that could be entrusted to anyone if machines were used" > Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) > > "The Internet is the largest library in the world. > But the problem is that the books are all scattered on the floor" > John Allen Paulos, Mathematician (1945-living) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < > lorenzetto.luca at gmail.com> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lveyde at redhat.com Wed Feb 7 09:23:47 2018 From: lveyde at redhat.com (Lev Veyde) Date: Wed, 7 Feb 2018 11:23:47 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.1 Sixth Release Candidate is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.1 Sixth Release Candidate, as of February 7th, 2018 This update is a release candidate of the first in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not be used in production. This release is available now for: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is already available - oVirt Node will be available soon [2] Additional Resources: * Read more about the oVirt 4.2.1 release highlights: http://www.ovirt.org/release/4.2.1/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.1/ [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From biakymet at redhat.com Wed Feb 7 10:05:23 2018 From: biakymet at redhat.com (Bohdan Iakymets) Date: Wed, 7 Feb 2018 11:05:23 +0100 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal In-Reply-To: References: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> Message-ID: Hi, if you want VNC, you must choose it in the admin portal: 'Edit VM' -> 'Console (you need to show advanced options)' -> 'Graphics protocols'. With the best regards, Bohdan Iakymets On Tue, Feb 6, 2018 at 1:13 PM, wrote: > I can't even see other options, like adding NICs, or changing the machine > type (server, desktop)... Was this removed on purpose or there's some > permission(s) to grant? > > > El 2018-02-06 11:45, nicolas at devels.es escribió: > >> Hi, >> >> We recently upgraded to oVirt 4.2.0 and we're testing things so we can >> determine if our production system might also be upgraded or not. We >> do an extensive use of the User Portal, I've granted the VmCreator and >> DiskProfileUser privileges on a user (the user has a quota as well), I >> logged in to the user portal, I can successfully create a VM setting >> its memory and CPUs but: >> >> 1) I can't see a way to change the console type. By default, when the >> machine is created, SPICE is chosen as the mechanism, and I'd like to >> change it to VNC, but I can't find a way. >> 2) I can't see a way to add a disk to the VM. >> >> I'm attaching a screenshot of what I see in the panel. >> >> Are some new privileges needed to add a disk or change the console type? >> >> Thanks >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From valentin.bajrami at target-holding.nl Wed Feb 7 10:11:09 2018 From: valentin.bajrami at target-holding.nl (Valentin Bajrami) Date: Wed, 7 Feb 2018 11:11:09 +0100 Subject: [ovirt-users] Fwd: A possible bug on Fedora 27 In-Reply-To: <2e277f33-49c6-18ef-cf64-3d93b147a6ce@target-holding.nl> References: <2e277f33-49c6-18ef-cf64-3d93b147a6ce@target-holding.nl> Message-ID: <80e6decf-5c9c-04e0-3aa7-1e517e0ab69a@target-holding.nl> Have you been able to have a look at this issue? Thanks in advance -------- Forwarded Message -------- Subject: [ovirt-users] A possible bug on Fedora 27 Date: Tue, 30 Jan 2018 09:45:55 +0100 From: Valentin Bajrami To: users at ovirt.org Hi Community, Recently we discovered that our VMs became unstable after upgrading from Fedora 26 to Fedora 27. The journalctl log shows the following Jan 29 20:03:28 host1.project.local libvirtd[2741]: 2018-01-29 19:03:28.789+0000: 2741: error : qemuMonitorIO:705 : internal error: End of file from qemu monitor Jan 29 20:09:14 host1.project.local libvirtd[2741]: 2018-01-29 19:09:14.111+0000: 2741: error : qemuMonitorIO:705 : internal error: End of file from qemu monitor Jan 29 20:10:29 host1.project.local libvirtd[2741]: 2018-01-29 19:10:29.584+0000: 2741: error : qemuMonitorIO:705 : internal error: End of file from qemu monitor A similar bug report is already present here: https://bugzilla.redhat.com/show_bug.cgi?id=1523314 but doesn't reflect our problem entirely. This bug seems to be triggered only when a VM is shut down gracefully. In our case this is being triggered without attempting to shut down a VM. Again, this is causing the VMs to be unstable and eventually they'll shut down by themselves. Do you have any clue what could be causing this? -- Kind regards, Valentin Bajrami _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed...
URL: From michal.skrivanek at redhat.com Wed Feb 7 10:54:05 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Wed, 7 Feb 2018 11:54:05 +0100 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal In-Reply-To: References: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> Message-ID: <25A9640B-418E-4F5D-B93C-679ECF6EAC88@redhat.com> > On 6 Feb 2018, at 13:13, nicolas at devels.es wrote: > > I can't even see other options, like adding NICs, or changing the machine type (server, desktop)... Was this removed on purpose or there's some permission(s) to grant? it just hasn't been coded. Feel free to open issues or submit patches for the project[1] > > El 2018-02-06 11:45, nicolas at devels.es escribió: >> Hi, >> We recently upgraded to oVirt 4.2.0 and we're testing things so we can >> determine if our production system might also be upgraded or not. We >> do an extensive use of the User Portal, I've granted the VmCreator and >> DiskProfileUser privileges on a user (the user has a quota as well), I >> logged in to the user portal, I can successfully create a VM setting >> its memory and CPUs but: yeah, the VM creation in ovirt-web-ui is simplistic and relies on the original Template mostly. >> 1) I can't see a way to change the console type. By default, when the >> machine is created, SPICE is chosen as the mechanism, and I'd like to >> change it to VNC, but I can't find a way. it's using whatever you have in the Template. Could you change it there? If you need flexibility then use SPICE+VNC, then clients can choose either. Thanks, michal [1] https://github.com/oVirt/ovirt-web-ui >> 2) I can't see a way to add a disk to the VM. >> I'm attaching a screenshot of what I see in the panel. >> Are some new privileges needed to add a disk or change the console type?
>> Thanks >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From nicolas at devels.es Wed Feb 7 10:56:14 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Wed, 07 Feb 2018 10:56:14 +0000 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal In-Reply-To: References: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> Message-ID: <53673ace4a6f029013d58bd882cd7bfc@devels.es> Sorry, but I don't understand what the purpose of the new VM portal is, then... I mean, you can create a VM but you can't modify the console type, nor add disks, nor add NIC interfaces? If that's how the new VM portal works, we can't upgrade our oVirt infrastructure because as I said, we extensively use the VM portal and users need to create their VMs and fully customize them. Regards. El 2018-02-07 10:05, Bohdan Iakymets escribió: > Hi, > > if you want VNC, you must choose it on adminportal: 'Edit VM' -> > 'Console (you need to show advanced options)' -> 'Graphics protocols'. > > With the best regards, > Bohdan Iakymets > > On Tue, Feb 6, 2018 at 1:13 PM, wrote: > >> I can't even see other options, like adding NICs, or changing the >> machine type (server, desktop)... Was this removed on purpose or >> there's some permission(s) to grant? >> >> El 2018-02-06 11:45, nicolas at devels.es escribió: >> >>> Hi, >>> >>> We recently upgraded to oVirt 4.2.0 and we're testing things so >>> we can >>> determine if our production system might also be upgraded or not.
>>> We >>> do an extensive use of the User Portal, I've granted the >>> VmCreator and >>> DiskProfileUser privileges on a user (the user has a quota as >>> well), I >>> logged in to the user portal, I can successfully create a VM >>> setting >>> its memory and CPUs but: >>> >>> 1) I can't see a way to change the console type. By default, when >>> the >>> machine is created, SPICE is chosen as the mechanism, and I'd >>> like to >>> change it to VNC, but I can't find a way. >>> 2) I can't see a way to add a disk to the VM. >>> >>> I'm attaching a screenshot of what I see in the panel. >>> >>> Are some new privileges needed to add a disk or change the >>> console type? >>> >>> Thanks >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users [1] >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [1] > > > > Links: > ------ > [1] http://lists.ovirt.org/mailman/listinfo/users From nicolas at devels.es Wed Feb 7 10:59:08 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Wed, 07 Feb 2018 10:59:08 +0000 Subject: [ovirt-users] Add a disk and set the console for a VM in the user portal In-Reply-To: <25A9640B-418E-4F5D-B93C-679ECF6EAC88@redhat.com> References: <88a631e88dfe041ddde01f858f7fdfa7@devels.es> <25A9640B-418E-4F5D-B93C-679ECF6EAC88@redhat.com> Message-ID: <2879861d7279a5ced0b8c42f845ab284@devels.es> El 2018-02-07 10:54, Michal Skrivanek escribió: >> On 6 Feb 2018, at 13:13, nicolas at devels.es wrote: >> >> I can't even see other options, like adding NICs, or changing the >> machine type (server, desktop)... Was this removed on purpose or >> there's some permission(s) to grant? > > it just hasn't been coded. Feel free to open issues or submit patches > for the project[1] > I will open the issues for that.
I feel the new interface has been a nice improvement but if it doesn't contain all the former functionalities it's not useful at all. Our users create their machines from scratch with no template (they are engineering students and they need to know how to do some basic stuff). Thank you. > >> >> El 2018-02-06 11:45, nicolas at devels.es escribió: >>> Hi, >>> We recently upgraded to oVirt 4.2.0 and we're testing things so we >>> can >>> determine if our production system might also be upgraded or not. We >>> do an extensive use of the User Portal, I've granted the VmCreator >>> and >>> DiskProfileUser privileges on a user (the user has a quota as well), >>> I >>> logged in to the user portal, I can successfully create a VM setting >>> its memory and CPUs but: > > yeah, the VM creation in ovirt-web-ui is simplistic and relies on the > original Template mostly. > >>> 1) I can't see a way to change the console type. By default, when the >>> machine is created, SPICE is chosen as the mechanism, and I'd like to >>> change it to VNC, but I can't find a way. > > it's using whatever you have in the Template. Could you change it > there? If you need flexibility then use SPICE+VNC, then clients can > choose either. > > Thanks, > michal > > [1] https://github.com/oVirt/ovirt-web-ui > >>> 2) I can't see a way to add a disk to the VM. >>> I'm attaching a screenshot of what I see in the panel. >>> Are some new privileges needed to add a disk or change the console >>> type?
>>> Thanks >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users From spfma.tech at e.mail.fr Wed Feb 7 11:15:48 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Wed, 07 Feb 2018 12:15:48 +0100 Subject: [ovirt-users] Network configuration validation error Message-ID: <20180207111548.5C1BCE2262@smtp01.mail.de> Hi, I am experiencing a new problem: when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating: must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" Attribut : ipConfiguration.iPv4Addresses[0].gateway Moreover, on the general status of the server, I have a "Host has no default route" alert. The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both servers have the same setup, with different addresses of course :-) I have not been able to find anything useful in the logs. Is this a bug or am I doing something wrong? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mburman at redhat.com Wed Feb 7 12:23:40 2018 From: mburman at redhat.com (Michael Burman) Date: Wed, 7 Feb 2018 14:23:40 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180207111548.5C1BCE2262@smtp01.mail.de> References: <20180207111548.5C1BCE2262@smtp01.mail.de> Message-ID: Hi This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - https://bugzilla.redhat.com/show_bug.cgi?id=1528906 The no default route bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589 Thanks, On Wed, Feb 7, 2018 at 1:15 PM, wrote: > > Hi, > I am experiencing a new problem: when I try to modify something in the > network setup on the second node (added to the cluster after installing the > engine on the other one) using the Engine GUI, I get the following error > when validating: > > must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" > Attribut : ipConfiguration.iPv4Addresses[0].gateway > > Moreover, on the general status of the server, I have a "Host has no > default route" alert. > > The ovirtmgmt network has a defined gateway of course, and the storage > network has none because it is not required. Both servers have the same > setup, with different addresses of course :-) > > I have not been able to find anything useful in the logs. > > Is this a bug or am I doing something wrong? > > Regards > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed...
URL: From spfma.tech at e.mail.fr Wed Feb 7 13:00:04 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Wed, 07 Feb 2018 14:00:04 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: References: Message-ID: <20180207130005.1BFD2E12B8@smtp01.mail.de> Hi, Thanks a lot for your answer. I applied some updates at node level, but I forgot to upgrade the engine! When I try to do so I get a strange error: "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." Here are the installed packages on my nodes: python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 ovirt-imageio-common-1.2.0-1.el7.centos.noarch ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-release42-4.2.0-1.el7.centos.noarch ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 ovirt-host-4.2.0-1.el7.centos.x86_64 ovirt-host-deploy-1.7.0-1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch ovirt-vmconsole-host-1.0.4-1.el7.noarch cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.noarch What am I supposed to do? I see no newer packages available.
Regards Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a écrit : Hi This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - https://bugzilla.redhat.com/show_bug.cgi?id=1528906 The no default route bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589 Thanks, On Wed, Feb 7, 2018 at 1:15 PM, wrote: Hi, I am experiencing a new problem: when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating: must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" Attribut : ipConfiguration.iPv4Addresses[0].gateway Moreover, on the general status of the server, I have a "Host has no default route" alert. The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both servers have the same setup, with different addresses of course :-) I have not been able to find anything useful in the logs. Is this a bug or am I doing something wrong? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed...
URL: From P.Staniforth at leedsbeckett.ac.uk Wed Feb 7 14:31:46 2018 From: P.Staniforth at leedsbeckett.ac.uk (Staniforth, Paul) Date: Wed, 7 Feb 2018 14:31:46 +0000 Subject: [ovirt-users] 4.1 slow user portal vm creation In-Reply-To: <2D9D908C-4079-405B-A5C4-0DF8557A63CE@redhat.com> References: <1517235400996.98601@leedsbeckett.ac.uk>, <2D9D908C-4079-405B-A5C4-0DF8557A63CE@redhat.com> Message-ID: <1518013906681.42708@leedsbeckett.ac.uk> Hello, After further testing and restoring the engine to a test system it seems that even when listing templates using the web API it is very slow whereas listing VMs isn't slow. Even if using admin at internal and --header 'Filter:True' it's slow whereas without the header it's fast. As our test system has no storage or hosts is it possible to delete templates from the database as a restore from October doesn't seem to have the problem. Thanks, Paul S. ________________________________ From: Michal Skrivanek Sent: 29 January 2018 14:58 To: Staniforth, Paul Cc: users at ovirt.org Subject: Re: [ovirt-users] 4.1 slow user portal vm creation On 29 Jan 2018, at 15:16, Staniforth, Paul > wrote: Hello, We are experiencing slow responses when trying to create a new vm in the user portal (for some users the New Virtual Machine page doesn't get created). Also in the Templates page of the user portal it doesn't list the templates, just has the 3 waiting to load icons flashing. In the admin portal it lists the templates with no problem. How many users using that user portal do you have? It may be just that slow because it's too many Note it's completely removed from 4.2 and the ovirt-web-ui does provide only limited New VM capabilities - but it should perform much better Thanks, michal We are running 4.1.9 on the engine and nodes. Any help appreciated. Thanks, Paul S.
To view the terms under which this email is distributed, please go to:- http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL:

From elinovemail at abv.bg Wed Feb 7 14:54:38 2018 From: elinovemail at abv.bg (eee ffff) Date: Wed, 7 Feb 2018 16:54:38 +0200 (EET) Subject: [ovirt-users] Migration of a VM from ovirt 3.6 to ovirt 4.2 Message-ID: <1035527031.436309.1518015280660.JavaMail.apache@nm21.abv.bg> Dear ovirt-users, I would like to copy the VMs that I have now on a running ovirt 3.6 Data Center to a new ovirt 4.2 Data Center, located in a different building. An export domain is not an option: I would need to upgrade the ovirt 3.6 host to 4.2, and (as this is an operation that I would have to do multiple times) constantly upgrading and downgrading a host just to keep it compatible with the ovirt environment does not make sense. Do you have other suggestions? Cheers, Eli -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sleviim at redhat.com Wed Feb 7 15:31:28 2018 From: sleviim at redhat.com (Shani Leviim) Date: Wed, 7 Feb 2018 17:31:28 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Hi Alex, Sorry for the delayed reply. From a brief look at your logs, I've noticed that the error in the engine's log was logged at 2018-02-03 00:22:56, while your vdsm log ends at 2018-02-03 00:01:01. Is there a way you can reproduce the issue and capture a fuller vdsm log?
*Regards,* *Shani Leviim* On Sat, Feb 3, 2018 at 5:41 PM, Alex K wrote: > Attaching the vdsm log from the host that triggered the error, where the VM that > was being cloned was running at that time. > > thanx, > Alex > > On Sat, Feb 3, 2018 at 5:20 PM, Yaniv Kaul wrote: >> >> On Feb 3, 2018 3:24 PM, "Alex K" wrote: >> >> Hi All, >> >> I have reproduced the backup failure. The VM that failed is named >> Win-FileServer and is a Windows 2016 server 64-bit with 300GB of disk. >> During the cloning step the VM went unresponsive and I had to stop/start >> it. >> I am attaching the logs. I have another VM with the same OS (named DC-Server >> within the logs) but with a smaller disk (60GB) which does not give any error >> when it is cloned. >> I see a line: >> >> EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, Call >> Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM >> v2.sitedomain command SnapshotVDS failed: Message timeout which can be >> caused by communication issues >> >> I suggest adding the relevant vdsm.log as well. >> Y. >> >> I appreciate any advice on why I am facing such an issue with the backups. >> >> thanx, >> Alex >> >> On Tue, Jan 30, 2018 at 12:49 AM, Alex K wrote: >> >>> Ok. I will reproduce and collect logs. >>> >>> Thanx, >>> Alex >>> >>> On Jan 29, 2018 20:21, "Mahdi Adnan" wrote: >>> >>> I have Windows VMs, both client and server. >>> If you provide the engine.log file we might have a look at it. >>> >>> -- >>> >>> Respectfully >>> *Mahdi A.
Mahdi* >>> >>> ------------------------------ >>> *From:* Alex K >>> *Sent:* Monday, January 29, 2018 5:40 PM >>> *To:* Mahdi Adnan >>> *Cc:* users >>> *Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM >>> >>> Hi, >>> >>> I have observed this logged at the host when the issue occurs: >>> >>> VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer >>> >>> or >>> >>> VDSM host.domain command GetStatsVDS failed: Connection reset by peer >>> >>> In the engine logs I have not been able to correlate anything. >>> >>> Are you hosting Windows 2016 server and Windows 10 VMs? >>> The weird thing is that I have the same setup on other clusters with no issues. >>> >>> Thanx, >>> Alex >>> >>> On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan >>> wrote: >>> >>> Hi, >>> >>> We have a cluster of 17 nodes, backed by GlusterFS storage, and using >>> this same script for backup. >>> We have no issues with it so far. >>> Have you checked the engine log file? >>> >>> -- >>> >>> Respectfully >>> *Mahdi A. Mahdi* >>> >>> ------------------------------ >>> *From:* users-bounces at ovirt.org on behalf of >>> Alex K >>> *Sent:* Wednesday, January 24, 2018 4:18 PM >>> *To:* users >>> *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM >>> >>> Hi all, >>> >>> I have a cluster with 3 nodes, using ovirt 4.1 in a self-hosted setup on >>> top of glusterfs. >>> On some VMs (especially one Windows Server 2016 64-bit with 500 GB of >>> disk), I almost always observe that during the backup the VM is rendered >>> unresponsive (the dashboard shows a question mark at the VM status and the >>> VM does not respond to ping or to anything). Guest agents are installed on >>> the VMs. >>> >>> For scheduled backups I use: >>> >>> https://github.com/wefixit-AT/oVirtBackup >>> >>> The script does the following: >>> >>> 1. snapshot VM (this is done ok without any failure) >>> >>> 2. Clone snapshot (this step renders the VM unresponsive) >>> >>> 3. Export Clone >>> >>> 4. Delete clone >>> >>> 5.
Delete snapshot >>> >>> Do you have any similar experience? Any suggestions to address this? >>> >>> I have never seen such an issue with hosted Linux VMs. >>> >>> The cluster has enough storage to accommodate the clone. >>> >>> Thanx, >>> >>> Alex >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL:

From sleviim at redhat.com Wed Feb 7 15:44:53 2018 From: sleviim at redhat.com (Shani Leviim) Date: Wed, 7 Feb 2018 17:44:53 +0200 Subject: [ovirt-users] GUI trouble when adding FC datadomain In-Reply-To: References: Message-ID: Hi, There's a fix available for ovirt-engine-4.2.1 [1] [1] https://bugzilla.redhat.com/show_bug.cgi?id=1524126 *Regards,* *Shani Leviim* On Fri, Feb 2, 2018 at 2:18 PM, Yaniv Kaul wrote: > On Feb 2, 2018 1:09 PM, "Roberto Nunin" wrote: > > Hi Yaniv > > Currently Engine is 4.2.0.2-1 on CentOS7.4 > I've used oVirt Node image 4.2-2017122007.iso > > The LUN I need is certainly empty (the second one in the list). > > Please file a bug with logs, so we can understand the issue better. > Y. > > 2018-02-02 13:01 GMT+01:00 Yaniv Kaul : > >> Which version are you using? Are you sure the LUNs are empty? >> Y. >> >> On Feb 2, 2018 11:19 AM, "Roberto Nunin" wrote: >> >>> Hi all >>> >>> I'm trying to set up an HE cluster, with FC domain. >>> HE is also on FC. >>> >>> When I try to add the first domain in the datacenter, I get this form: >>> >>> [image: Immagine incorporata 1] >>> >>> So I'm not able to choose any of the three volumes currently masked >>> towards the chosen host. >>> I've tried all the browsers I have: Firefox 58, Chrome 63, IE 11, MS Edge, with >>> no changes.
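Stepping back to Alex's backup thread above: the five steps of the quoted script form one sequence whose cleanup (delete clone, delete snapshot) should run even when the clone or export step fails, the clone being the step where the VM went unresponsive. A sketch of that flow; the `client` object is a hypothetical stand-in for whatever wrapper performs each call (the real oVirtBackup script drives the oVirt REST API):

```python
def backup_vm(client, vm, export_domain):
    """Snapshot -> clone -> export -> delete clone -> delete snapshot."""
    snap = client.create_snapshot(vm)                  # step 1: reported to succeed
    clone = None
    try:
        clone = client.clone_from_snapshot(vm, snap)   # step 2: where the VM hung
        client.export_vm(clone, export_domain)         # step 3
    finally:
        if clone is not None:
            client.delete_vm(clone)                    # step 4
        client.delete_snapshot(vm, snap)               # step 5: always clean up


class FakeClient:
    """Records calls; stands in for a real API wrapper in this demo."""
    def __init__(self):
        self.calls = []
    def create_snapshot(self, vm):
        self.calls.append("snapshot"); return "snap-1"
    def clone_from_snapshot(self, vm, snap):
        self.calls.append("clone"); return "clone-1"
    def export_vm(self, clone, domain):
        self.calls.append("export")
    def delete_vm(self, clone):
        self.calls.append("delete_clone")
    def delete_snapshot(self, vm, snap):
        self.calls.append("delete_snapshot")


fake = FakeClient()
backup_vm(fake, "Win-FileServer", "export-domain")
```

The `try/finally` matters for exactly the failure mode in this thread: if the clone times out, a dangling snapshot on a large disk keeps growing and makes the next backup run worse.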
>>> >>> Tried to click in the rows, scrolling etc., with no success. >>> >>> Has anyone seen the same issue? >>> Thanks in advance >>> >>> -- >>> Roberto Nunin >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users > > -- > Roberto Nunin > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 65845 bytes Desc: not available URL:

From nicolas at ecarnot.net Wed Feb 7 17:06:32 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Wed, 7 Feb 2018 18:06:32 +0100 Subject: [ovirt-users] qcow2 images corruption Message-ID: Hello,

TL;DR: qcow2 images keep getting corrupted. Any workaround?

Long version: I have already raised this discussion on the oVirt and qemu-block mailing lists, under similar circumstances, but I have learned more in the months since, so here is some information:

- We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS 7.{2,3} hosts
- Hosts:
  - CentOS 7.2 1511: Kernel 3.10.0-327, KVM 2.3.0-31, libvirt 1.2.17, vdsm 4.17.32-1
  - CentOS 7.3 1611: Kernel 3.10.0-514, KVM 2.3.0-31, libvirt 2.0.0-10, vdsm 4.17.32-1
- Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated network
- It varies from week to week, but all in all there are around 32 hosts, 8 storage domains and, for various reasons, very few VMs (less than 200).
- One peculiar point is that most of our VMs are provided with an additional dedicated network interface that is iSCSI-connected to some volumes of our SAN, these volumes not being part of the oVirt setup. That could lead to a lot of additional iSCSI traffic.
From time to time, a random VM appears paused by oVirt. Digging into the oVirt engine logs, then into the host vdsm logs, it appears that the host considers the qcow2 image corrupted. In what I consider conservative behavior, vdsm stops any interaction with this image and marks the VM as paused. Any attempt to unpause it leads to the same conservative pause.

After having found (https://access.redhat.com/solutions/1173623) the right logical volume hosting the qcow2 image, I can run qemu-img check on it:
- On 80% of my VMs, I find no errors.
- On 15% of them, I find Leaked cluster errors that I can correct using "qemu-img check -r all".
- On 5% of them, I find Leaked cluster errors and further fatal errors, which cannot be corrected with qemu-img. In rare cases, qemu-img can correct them but destroys large parts of the image (it becomes unusable), and in other cases it cannot correct them at all.

Months ago, I already sent a similar message, but the error message then was about No space left on device (https://www.mail-archive.com/qemu-block at gnu.org/msg00110.html). This time, I don't have this message about space, only corruption.

I kept reading and found a similar discussion in the Proxmox group: https://lists.ovirt.org/pipermail/users/2018-February/086750.html https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heavy-disk-i-o.32865/page-2

What reads similar to my case is: usage of qcow2, heavy disk I/O, and use of the virtio-blk driver. In the Proxmox thread, they tend to say that using virtio-scsi is the solution. I have asked this question to oVirt experts (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but it's not clear the driver is to blame. I agree with the answer Yaniv Kaul gave me, saying I have to properly report the issue, so I'd like to know which specific information I can give you now.

As you can imagine, all this setup is in production, and for most of the VMs, I cannot "play" with them.
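Nicolas's 80/15/5 triage of `qemu-img check` results can be scripted; the summary phrases matched below ("leaked clusters were found", "errors were found") follow the output format of the qemu-img versions I have seen, but treat the exact wording as an assumption and check it against your own qemu-img:

```python
import re

def classify_check_output(text):
    """Rough triage of `qemu-img check` output into the three buckets
    from this thread: clean, leaked (fixable with -r all), corrupt."""
    has_errors = "ERROR" in text or re.search(r"\b\d+ errors were found", text)
    has_leaks = "leaked cluster" in text.lower()
    if has_errors:
        return "corrupt"      # repair may destroy the image
    if has_leaks:
        return "leaked"       # qemu-img check -r all can reclaim these
    return "clean"

# Sample outputs, paraphrasing qemu-img's usual summary lines.
assert classify_check_output("No errors were found on the image.") == "clean"
assert classify_check_output(
    "37 leaked clusters were found on the image.\n"
    "This means waste of disk space, but no harm to data.") == "leaked"
assert classify_check_output(
    "ERROR cluster 5 refcount=0 reference=1\n"
    "2 errors were found on the image.") == "corrupt"
```

A nightly job could run this over each logical volume's check output and only flag the "corrupt" bucket for manual attention, which matches the campaign Nicolas describes below.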
Moreover, we launched a campaign of nightly stopping every VM, running qemu-img check on them one by one, then booting them again. So it might take some time before I find another corrupted image (which I'll preciously store for debugging).

Other information: we very rarely do snapshots, but I'm inclined to think that automated migrations of VMs could trigger similar behaviors on qcow2 images.

Last point about the versions we use: yes, that's old; yes, we're planning to upgrade, but we don't know when.

Regards, -- Nicolas ECARNOT

From cmar at eurotux.com Wed Feb 7 17:24:51 2018 From: cmar at eurotux.com (Carlos Rodrigues) Date: Wed, 07 Feb 2018 17:24:51 +0000 Subject: [ovirt-users] Clear name_server table entries Message-ID: <1518024291.3537.5.camel@eurotux.com> Hi, I'm getting the following problem: https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3 and after fixing the DNS entries in /etc/resolv.conf on the host, I have too many entries in the name_server table: engine=# select count(*) from name_server; count ------- 31401 (1 row) I would like to know if I may delete these entries? Best regards, -- Carlos Rodrigues Engenheiro de Software Sénior Eurotux Informática, S.A. | www.eurotux.com (t) +351 253 680 300 (m) +351 911 926 110

From maozza at gmail.com Wed Feb 7 21:53:04 2018 From: maozza at gmail.com (maoz zadok) Date: Wed, 7 Feb 2018 23:53:04 +0200 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider Message-ID: Hello there, I'm following the https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt/ guide in order to import VMs from Libvirt to oVirt using ssh. URL: "qemu+ssh://host1.example.org/system" and I get the following error: Failed to communicate with the external provider, see log for additional details.

*oVirt agent log:* *- Failed to retrieve VMs information from external server qemu+ssh://XXX.XXX.XXX.XXX/system* *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot recv data: Host key verification failed.: Connection reset by peer*

*remote host sshd DEBUG log:* *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port 48148 on XXX.XXX.XXX.123 port 22* *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4* *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000* *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string SSH-2.0-OpenSSH_7.4* *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode for protocol 2.0* *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received [preauth]* *Feb 7 16:38:29 XXX
sshd[110005]: debug1: kex: algorithm: curve25519-sha256 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 port 48148 [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]* *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup* *Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006* *Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.* *Feb 7 16:38:29 XXX sshd[110007]: debug1: Set /proc/self/oom_score_adj to 0* *Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8* *Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after dupping: 3, 3* *Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 port 48150 on XXX.XXX.XXX.123 port 22* *Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4* *Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000* *Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string SSH-2.0-OpenSSH_7.4* *Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility 
mode for protocol 2.0* *Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT sent [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: algorithm: curve25519-sha256 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 blocks [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147 port 48150 [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]* *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup* *Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child 110008* *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.* *Feb 7 16:38:30 XXX sshd[110009]: debug1: Set /proc/self/oom_score_adj to 0* *Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8* *Feb 7 16:38:30 XXX sshd[110009]: 
debug1: inetd sockets after dupping: 3, 3* *Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 port 48152 on XXX.XXX.XXX.123 port 22* *Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4* *Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000* *Feb 7 16:38:30 XXX sshd[110009]: debug1: Local version string SSH-2.0-OpenSSH_7.4* *Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode for protocol 2.0* *Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm: curve25519-sha256 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 blocks [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: Connection closed by 
XXX.XXX.XXX.147 port 48152 [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]* *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup* *Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child 110010* *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110011.* *Feb 7 16:38:30 XXX sshd[110011]: debug1: Set /proc/self/oom_score_adj to 0* *Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 out 5 newsock 5 pipe 7 sock 8* *Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after dupping: 3, 3* *Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 port 48154 on XXX.XXX.XXX.123 port 22* *Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version 2.0; client software version OpenSSH_7.4* *Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat OpenSSH* compat 0x04000000* *Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string SSH-2.0-OpenSSH_7.4* *Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode for protocol 2.0* *Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm: curve25519-sha256 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: ecdsa-sha2-nistp256 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: chacha20-poly1305 at openssh.com MAC: compression: none [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 
dh_need=64 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 dh_need=64 [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_KEX_ECDH_INIT [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS [preauth]* *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147 port 48154 [preauth]* Thank you! -------------- next part -------------- An HTML attachment was scrubbed... URL:

From andreil1 at starlett.lv Wed Feb 7 22:28:46 2018 From: andreil1 at starlett.lv (Andrei V) Date: Thu, 8 Feb 2018 00:28:46 +0200 Subject: [ovirt-users] oVirt CLI Question Message-ID: Hi, How can I force power off, and then launch (after a timeout, e.g. 20 sec), a particular VM from a bash or Python script? Is 20 sec enough for the oVirt engine to get updated after a forced power off? What happened to this wiki? It seems to have been deleted or moved. http://wiki.ovirt.org/wiki/CLI#Usage Is this project part of the oVirt distro? It looks to be in a state of active development, with the last updates 2 months ago. https://github.com/fbacchella/ovirtcmd Thanks !

From i.am.stack at gmail.com Thu Feb 8 00:13:06 2018 From: i.am.stack at gmail.com (~Stack~) Date: Wed, 7 Feb 2018 18:13:06 -0600 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL Message-ID: Greetings, I was having a lot of issues with 4.2, and 95% of them are in the change logs for 4.2.1. Since this is a new build, I just blew everything away and started from scratch with the RC release. The very first thing that I did after engine-config was to set up my SSL cert. I followed the directions from here: https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ Logged in the first time to the web interface and everything worked! Great.
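Andrei's CLI question above (force power off, then start after a delay) is usually better served by polling than by a fixed 20-second sleep, since the engine takes a variable amount of time to notice the VM is down. A sketch with a hypothetical `client` wrapper; in practice the stop/status/start calls would go through the oVirt SDK or REST API:

```python
import time

def force_cycle(client, vm_id, timeout=60.0, poll=2.0,
                clock=time.monotonic, sleep=time.sleep):
    """Force-stop a VM, wait until the engine reports it down, then start it."""
    client.stop(vm_id)                        # forced power off
    deadline = clock() + timeout
    while client.status(vm_id) != "down":     # poll instead of a blind 20 s sleep
        if clock() > deadline:
            raise TimeoutError("VM %s still not down after %ss" % (vm_id, timeout))
        sleep(poll)
    client.start(vm_id)


class FakeClient:
    """Demo engine that reports the VM down on the third status poll."""
    def __init__(self):
        self.calls = []
        self._states = iter(["up", "up", "down"])
    def stop(self, vm_id):
        self.calls.append("stop")
    def start(self, vm_id):
        self.calls.append("start")
    def status(self, vm_id):
        return next(self._states)


fake = FakeClient()
force_cycle(fake, "vm1", sleep=lambda s: None)   # no real waiting in the demo
```

Polling with a deadline answers the "is 20 sec enough?" part directly: rather than guessing, the script waits exactly as long as the engine needs, with an upper bound.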
Install my hosts (also completely fresh installs - Scientific Linux 7 fully updated) and none would finish the install... I can send the full host debug log if you want, however, I'm pretty sure that the problem is because of the SSL somewhere. I've cut/pasted the relevant part. Any advice/help, please? Thanks! ~Stack~ 2018-02-07 16:56:21,697-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD otopi.plugins.ovirt_host_deploy.tune.tuned.Plugin._misc (None) 2018-02-07 16:56:21,698-0600 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id 2018-02-07 16:56:21,698-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) 2018-02-07 16:56:21,699-0600 DEBUG otopi.transaction transaction._prepare:61 preparing 'File transaction for '/etc/vdsm/vdsm.id'' 2018-02-07 16:56:21,699-0600 DEBUG otopi.filetransaction filetransaction.prepare:183 file '/etc/vdsm/vdsm.id' missing 2018-02-07 16:56:21,705-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) 2018-02-07 16:56:21,706-0600 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks 2018-02-07 16:56:21,706-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) 2018-02-07 16:56:21,707-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) 2018-02-07 16:56:21,707-0600 DEBUG otopi.context context._executeMethod:128 Stage misc METHOD 
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc (None) 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### Setting up PKI 2018-02-07 16:56:21,709-0600 DEBUG otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:813 execute: ('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), executable='None', cwd='None', env=None 2018-02-07 16:56:21,756-0600 DEBUG otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:863 execute-result: ('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), rc=0 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### Please issue VDSM certificate based on this certificate request 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ***D:MULTI-STRING VDSM_CERTIFICATE_REQUEST --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND -----BEGIN CERTIFICATE REQUEST----- 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMZm 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND 
eYTWbHKkN+GlQnZ8C6fdk++htyFE+IHSzkhTyTSZdM0bPTdvhomTeCwzNlWBWdU+ 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND PrVB7j/1iksSt6RXDQUWlPDPBNfAa6NtZijEaGuxAe0RpI71G5feZmgVRmtIfrkE 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND 5BjhnCMJW46y9Y7dc2TaXzQqeVj0nkWkHt0v6AVdRWP3OHfOCvqoABny1urStvFT 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND TeAhSBVBUWTaNczBrZBpMXhXrSAe/hhLXMF3VfBV1odOOwb7AeccYkGePMxUOg8+ 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND XMAKdDCn7N0ZC4gSyEAP9mSobvOvNObcfw02NyYdny32/edgPrXKR+ISf4IwVd0d 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND mDonT4W2ROTE/A3M/mkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpAKAMv/Vh 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND 0ByC02R3fxtA6b/OZyys+xyIAfAGxo2NSDJDQsw9Gy1QWVtJX5BGsbzuhnNJjhRm 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND 5yx0wrS/k34oEv8Wh+po1fwpI5gG1W9L96Sx+vF/+UXBenJbhEVfir/cOzjmP1Hg 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND TtK5nYnBM7Py5JdnnAPww6jPt6uRypDZqqM8YOct1OEsBr8gPvmQvt5hDGJKqW37 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND xFbad6ILwYIE0DXAu2h9y20Pl3fy4Kb2LQDjltiaQ2IBiHFRUB/H2DOxq0NpH4z7 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND wqU/ai7sXWT/Vq4R6jD+c0V0WP4+VgSkgqPvnSYHwqQUbc9Kh7RwRnVyzLupbWdM 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND Pr+MZ2D1jg27 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND -----END 
CERTIFICATE REQUEST----- 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%QStart: VDSM_CERTIFICATE_CHAIN 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### Please input VDSM certificate chain that matches certificate request, top is issuer 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ### type '--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' in own line to mark end, '--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--' aborts 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND ***Q:MULTI-STRING VDSM_CERTIFICATE_CHAIN --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- --=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=-- 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine dialog.__logString:204 DIALOG:SEND **%QEnd: VDSM_CERTIFICATE_CHAIN 2018-02-07 16:56:22,765-0600 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/context.py", line 133, in _executeMethod method['method']() File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/ovirt-host-common/vdsm/pki.py", line 241, in _misc '\n\nPlease input VDSM certificate chain that ' File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/otopi/dialog/machine.py", line 327, in queryMultiString v = self._readline() File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/dialog.py", line 248, in 
_readline raise IOError(_('End of file')) IOError: End of file 2018-02-07 16:56:22,766-0600 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Misc configuration': End of file -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From xrs444 at xrs444.net Thu Feb 8 05:51:06 2018 From: xrs444 at xrs444.net (Thomas Letherby) Date: Thu, 08 Feb 2018 05:51:06 +0000 Subject: [ovirt-users] Maximum time node can be offline. Message-ID: Hello all, Is there a maximum length of time an oVirt Node 4.2 based host can be offline in a cluster before it would have issues when powered back on? The reason I ask is in my lab I currently have a three node cluster that works really well, however a lot of the time I only actually need the resources of one host, so to save power I'd like to keep the other two offline until needed. I can always script them to boot once a week or so if I need to. Thanks, Thomas -------------- next part -------------- An HTML attachment was scrubbed... URL: From mperina at redhat.com Thu Feb 8 08:13:14 2018 From: mperina at redhat.com (Martin Perina) Date: Thu, 8 Feb 2018 09:13:14 +0100 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: Hi Terry, from the error message below I'd say that you are either not using the correct IP address of the iLO5 interface or you haven't enabled remote access to your iLO5 interface. According to [1], iLO5 should be fully IPMI compatible. So are you sure that you enabled the remote access to your iLO5 address in iLO5 management? Please consult [1] on how to enable everything, and use a user with at least Operator privileges. Regards Martin [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018324en_us On Thu, Feb 8, 2018 at 7:57 AM, Terry hey wrote: > Dear Martin, > > Thank you for helping me. To answer your question, > 1. 
Does the Test in Edit fence agent dialog work?? > Ans: it shows that "Test failed: Internal JSON-RPC error" > > Regardless the fail result, i press "OK" to enable power management. There > are four event log appear in "Events" > ********************************The follwing are the log in > "Event""******************************** > Host host01 configuration was updated by admin at internal-authz. > Kdump integration is enabled for host hostv01, but kdump is not configured > properly on host. > Health check on Host host01 indicates that future attempts to Stop this > host using Power-Management are expected to fail. > Health check on Host host01 indicates that future attempts to Start this > host using Power-Management are expected to fail. > > 2. If not could you please try to install fence-agents-all package on > different host and execute? > Ans: It just shows "Connection timed out". > > So, does it means that it is not support iLo5 now or i configure wrongly? > > Regards, > Terry > > 2018-02-02 15:46 GMT+08:00 Martin Perina : > >> >> >> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey wrote: >> >>> Dear Martin, >>> >>> Um..Since i am going to use HPE ProLiant DL360 Gen10 Server to setup >>> oVirt Node(Hypervisor). HP G10 is using ilo5 rather than ilo4. Therefore, i >>> would like to ask whether oVirt power management support iLO5 or not. >>> >> >> ?We don't have any hardware with iLO5 available, but there is a good >> chance that it will be compatible with iLO4. Have you tried to setup your >> server with iLO4? Does the Test in Edit fence agent dialog work?? If not >> could you please try to install fence-agents-all package on different host >> and execute following: >> >> fence_ilo4 -a -l -p -v -o status >> >> and share the output? >> >> Thanks >> >> Martin >> >> >>> If not, do you have any idea to setup power management with HP G10? 
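Martin's manual check above can also be wrapped in a small script. A minimal sketch, assuming placeholder BMC address and credentials (the archive stripped the example arguments from the `fence_ilo4` command, and these helper names are made up, not from the thread):

```python
import subprocess

def build_fence_status_cmd(ip, user, password, agent="fence_ilo4"):
    # Mirrors the suggested check: fence_ilo4 -a <ip> -l <user> -p <password> -v -o status
    return [agent, "-a", ip, "-l", user, "-p", password, "-v", "-o", "status"]

def fencing_ok(ip, user, password, agent="fence_ilo4", timeout=30):
    """Return True when the fence agent can reach the BMC and report power status."""
    try:
        result = subprocess.run(
            build_fence_status_cmd(ip, user, password, agent),
            capture_output=True, text=True, timeout=timeout,
        )
    except FileNotFoundError:
        return False  # fence-agents-all not installed on this host
    except subprocess.TimeoutExpired:
        return False  # BMC unreachable -- matches the "Connection timed out" symptom
    return result.returncode == 0
```

A non-zero exit status or a timeout here points at network reachability or the iLO remote-access settings rather than at oVirt itself.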
>>> >>> Regards, >>> Terry >>> >>> 2018-02-01 16:21 GMT+08:00 Martin Perina : >>> >>>> >>>> >>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>>> lorenzetto.luca at gmail.com> wrote: >>>> >>>>> Hi, >>>>> >>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. Try >>>>> using the standard ipmi. >>>>> >>>> >>>> ?It's not just an alias, ilo3/ilo4 also have different defaults than >>>> ipmilan. For example if you use ilo4, then by default following is used: >>>> >>>> ? >>>> >>>> ?lanplus=1 >>>> power_wait=4 >>>> >>>> ?So I recommend to start with ilo4 and add any necessary custom options >>>> into Options field. If you need some custom >>>> options, could you please share them with us? It would be very helpful >>>> for us, if needed we could introduce ilo5 with >>>> different defaults then ilo4 >>>> >>>> Thanks >>>> >>>> Martin >>>> >>>> >>>>> Luca >>>>> >>>>> >>>>> >>>>> Il 31 gen 2018 11:14 PM, "Terry hey" ha >>>>> scritto: >>>>> >>>>>> Dear all, >>>>>> Did oVirt 4.2 Power management support iLO5 as i could not see iLO5 >>>>>> option in Power Management. >>>>>> >>>>>> Regards >>>>>> Terry >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>>> >>>> -- >>>> Martin Perina >>>> Associate Manager, Software Engineering >>>> Red Hat Czech s.r.o. >>>> >>> >>> >> >> >> -- >> Martin Perina >> Associate Manager, Software Engineering >> Red Hat Czech s.r.o. >> > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lorenzetto.luca at gmail.com Thu Feb 8 08:34:27 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 8 Feb 2018 09:34:27 +0100 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: On Tue, Feb 6, 2018 at 9:33 PM, Maor Lipchuk wrote: [cut] >> What I need is that information about VMs is replicated to the remote >> site with the disks. >> In an older test I had the issue that disks were replicated to the remote >> site, but the VM configuration was not! >> I found the disks in the "Disk" tab of the storage domain, but nothing on VM >> Import. > > > > Can you reproduce it and attach the logs of the setup before the disaster > and after the recovery? > That could happen in case of newly created VMs and Templates which were not > yet updated in the OVF_STORE disk, since the OVF_STORE update process was > not running yet before the disaster. > Since the time of a disaster can't be anticipated, gaps like this might > happen. > I haven't tried the recovery with Ansible yet. It was an experiment with a possible procedure to be performed manually, and it was on 4.0. I asked about this unexpected behavior and Yaniv told me it was due to the OVF_STORE not being updated, and that in 4.1 there is an API call that updates the OVF_STORE on demand. I'm creating a new setup today and I'll test again and check whether I still hit the issue. Anyway, if the problem persists, I think that for DR purposes the engine should update the OVF_STORE as soon as possible when a new VM is created or has disks added. [cut] >> >> Ok, but if I keep the master storage domain on a non-replicated volume, do >> I require this function? > > > Basically it should also fail on VM/Template registration in oVirt 4.1 since > there are also other functionalities like mapping of OVF attributes which > was added on VM/Template registration. > What do you mean? That I could fail to import any VM/Template? In what case? 
Another question: we have 2 DCs in the main site; do we also need 2 DCs in the recovery site, or can we import all the storage domains into a single DC on the recovery site? Could there be UUID collisions or similar? Thank you so much for your replies, Luca -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the biggest library in the world. But the problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (1945-living) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From pkotas at redhat.com Thu Feb 8 08:41:21 2018 From: pkotas at redhat.com (Petr Kotas) Date: Thu, 8 Feb 2018 09:41:21 +0100 Subject: [ovirt-users] After realizing HA migration, the virtual machine can still get the virtual machine information by using the "vdsm-client host getVMList" instruction on the host before the migration. In-Reply-To: <63fe8e5f.ae76.1616ae7955c.Coremail.pym0914@163.com> References: <22ca0fd1.aee0.1614c0a03df.Coremail.pym0914@163.com> <42d98cc1.163e.1614ef43bd6.Coremail.pym0914@163.com> <6c18742a.bb33.1615142da93.Coremail.pym0914@163.com> <63fe8e5f.ae76.1616ae7955c.Coremail.pym0914@163.com> Message-ID: Hi Pym, the feature is now in testing. I am not sure when it will be released, but I hope for an early date. Petr On Tue, Feb 6, 2018 at 12:36 PM, Pym wrote: > Thank you very much for your help, so is this patch released now? Where > can I get this patch? > > > > > > > At 2018-02-05 20:52:04, "Petr Kotas" wrote: > > Hi, > > I have experimented on the issue and figured out the reason for the > original issue. > > You are right that vm1 is not properly stopped. This is due to a > known issue in the graceful shutdown introduced in oVirt 4.2. > The VMs on the host being shut down are killed, but are not marked as stopped. > This results in the behavior you have observed. 
> > Luckily, the patch is already done and present in the latest oVirt. > However, be aware that gracefully shutting down the host will result in > a graceful shutdown of > the VMs. This results in the engine not migrating them, since they have been > terminated gracefully. > > Hope this helps. > > Best, > Petr > > > On Fri, Feb 2, 2018 at 6:00 PM, Simone Tiraboschi > wrote: > >> >> >> On Thu, Feb 1, 2018 at 1:06 PM, Pym wrote: >> >>> The environment on my side may be different from the link. My VM1 can be >>> used normally after it is started on host2, but there is still information >>> left on host1 that is not cleaned up. >>> >>> Only the interface and background can still get the information of vm1 >>> on host1, but the vm2 has been successfully started on host2, with the HA >>> function. >>> >>> I would like to ask a question: is the UUID of the virtual machine >>> stored in the database, or where is it maintained? Is it not successfully >>> deleted after using the HA function? >>> >>> >> I just encountered a similar behavior: >> after a reboot of the host 'vdsm-client Host getVMFullList' is still >> reporting an old VM that is not visible with 'virsh -r list --all'. >> >> I filed a bug to track it: >> https://bugzilla.redhat.com/show_bug.cgi?id=1541479 >> >> >> >>> >>> >>> >>> >>> At 2018-02-01 16:12:16, "Simone Tiraboschi" wrote: >>> >>> >>> >>> On Thu, Feb 1, 2018 at 2:21 AM, Pym wrote: >>> >>>> >>>> I checked vm1: it keeps the up state and can be used, but on host1, >>>> after the shutdown, there is a suspended vm1 which cannot be used; this is the >>>> problem now. >>>> >>>> In host1, you can get the information of vm1 using the "vdsm-client >>>> Host getVMList", but you can't get the vm1 information using the "virsh >>>> list". >>>> >>>> >>> Maybe a side effect of https://bugzilla.redhat.com >>> /show_bug.cgi?id=1505399 >>> >>> Arik? >>> >>> >>> >>>> >>>> >>>> >>>> At 2018-02-01 07:16:37, "Simone Tiraboschi" wrote: 
>>>> >>>> >>>> >>>> On Wed, Jan 31, 2018 at 12:46 PM, Pym wrote: >>>> >>>>> Hi: >>>>> >>>>> The current environment is as follows: >>>>> >>>>> Ovirt-engine version 4.2.0 is the source code compilation and >>>>> installation. Add two hosts, host1 and host2, respectively. At host1, a >>>>> virtual machine is created on vm1, and a vm2 is created on host2 and HA is >>>>> configured. >>>>> >>>>> Operation steps: >>>>> >>>>> Use the shutdown -r command on host1. Vm1 successfully migrated to >>>>> host2. >>>>> When host1 is restarted, the following situation occurs: >>>>> >>>>> The state of the vm2 will be shown in two images, switching from up >>>>> and pause. >>>>> >>>>> When I perform the "vdsm-client Host getVMList" in host1, I will get >>>>> the information of vm1. When I execute the "vdsm-client Host getVMList" in >>>>> host2, I will get the information of vm1 and vm2. >>>>> When I do "virsh list" in host1, there is no virtual machine >>>>> information. When I execute "virsh list" at host2, I will get information >>>>> of vm1 and vm2. >>>>> >>>>> How to solve this problem? >>>>> >>>>> Is it the case that vm1 did not remove the information on host1 during >>>>> the migration, or any other reason? >>>>> >>>> >>>> Did you also check if your vms always remained up? >>>> In 4.2 we have libvirt-guests service on the hosts which tries to >>>> properly shutdown the running VMs on host shutdown. >>>> >>>> >>>>> >>>>> Thank you. >>>>> >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>>> >>>> >>>> >>> >>> >>> >>> >>> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 35900 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 34515 bytes Desc: not available URL: From omachace at redhat.com Thu Feb 8 08:44:41 2018 From: omachace at redhat.com (Ondra Machacek) Date: Thu, 8 Feb 2018 09:44:41 +0100 Subject: [ovirt-users] oVirt CLI Question In-Reply-To: References: Message-ID: On 02/07/2018 11:28 PM, Andrei V wrote: > Hi, > > How to force power off, and then launch (after a timeout of e.g. 20sec) a > particular VM from a bash or Python script? Please check the following Python script: https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/stop_vm.py It stops the VM and waits until it is in the DOWN state. Then there is a script to start the VM: https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/start_vm.py > > Is 20sec enough to get the oVirt engine updated after a forced power off? > > What happened with this wiki? Seems like it is deleted or moved. > http://wiki.ovirt.org/wiki/CLI#Usage The CLI was deprecated and is not available anymore since 4.0, I think. You can use the Ansible modules or the Python SDK. > > Is this project part of the oVirt distro? It looks like it is in a state of active > development, with the last updates 2 months ago. > https://github.com/fbacchella/ovirtcmd No, it isn't part of the oVirt distribution. > > Thanks ! > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From mburman at redhat.com Thu Feb 8 08:54:18 2018 From: mburman at redhat.com (Michael Burman) Date: Thu, 8 Feb 2018 10:54:18 +0200 Subject: [ovirt-users] Clear name_server table entries In-Reply-To: <1518024291.3537.5.camel@eurotux.com> References: <1518024291.3537.5.camel@eurotux.com> Message-ID: Hi Yes, you may delete the entries; make sure you are not deleting name_server entries of hosts that are already running in your engine (if you have such). 
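The cleanup being discussed amounts to keeping one row per (host, name server) pair and dropping the accumulated duplicates. The logic, sketched in pure Python for illustration only (the real fix is a DELETE against the engine's PostgreSQL name_server table, so back up the engine database first; the column names below are assumptions, not taken from the schema):

```python
def dedup_name_servers(rows):
    """Keep the first occurrence of each (host_id, address) pair, in order --
    the effect that deleting duplicate name_server rows should have."""
    seen = set()
    kept = []
    for host_id, address in rows:
        if (host_id, address) not in seen:
            seen.add((host_id, address))
            kept.append((host_id, address))
    return kept

# Duplicates pile up when the same host's DNS entries are re-reported repeatedly:
rows = [("host1", "8.8.8.8"), ("host1", "8.8.4.4"), ("host1", "8.8.8.8")]
print(dedup_name_servers(rows))  # [('host1', '8.8.8.8'), ('host1', '8.8.4.4')]
```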
Deleting the multiple entries in the DB + removing the duplicated name servers in /etc/resolv.conf should work around this bug, which it was decided to close as WONTFIX. Just re-add the server to the engine. Cheers) On Wed, Feb 7, 2018 at 7:24 PM, Carlos Rodrigues wrote: > Hi, > > I'm getting the following problem: > > https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3 > > and after fixing the DNS entries in /etc/resolv.conf on the host, I have too many > entries in the name_server table: > > engine=# select count(*) from name_server; > count > ------- > 31401 > (1 row) > > I would like to know if I may delete these entries? > > Best regards, > > -- > Carlos Rodrigues > > Engenheiro de Software Sénior > > Eurotux Informática, S.A. | www.eurotux.com > (t) +351 253 680 300 (m) +351 911 926 110 > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburman at redhat.com Thu Feb 8 08:59:28 2018 From: mburman at redhat.com (Michael Burman) Date: Thu, 8 Feb 2018 10:59:28 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180207130005.1BFD2E12B8@smtp01.mail.de> References: <20180207130005.1BFD2E12B8@smtp01.mail.de> Message-ID: Not sure I understand from which version you are trying to upgrade and what the exact upgrade flow is. If I got it correctly, it seems that you upgraded the hosts to 4.2, but the engine is still 4.1? 
> > When I try to do so I get a strange error : "Cluster PROD is at version > 4.2 which is not supported by this upgrade flow. Please fix it before > upgrading." > > Here are the installed packets on my nodes : > python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 > ovirt-imageio-common-1.2.0-1.el7.centos.noarch > ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch > ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch > ovirt-setup-lib-1.1.4-1.el7.centos.noarch > ovirt-release42-4.2.0-1.el7.centos.noarch > ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch > ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 > ovirt-host-4.2.0-1.el7.centos.x86_64 > ovirt-host-deploy-1.7.0-1.el7.centos.noarch > ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch > ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch > ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch > ovirt-vmconsole-host-1.0.4-1.el7.noarch > cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch > ovirt-vmconsole-1.0.4-1.el7.noarch > > What I am supposed to do ? I see no newer packages available. > > Regards > > > > Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a ?crit: > > Hi > > This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - > https://bugzilla.redhat.com/show_bug.cgi?id=1528906 > > The no default route bug was fixed in - https://bugzilla.redhat.com/ > show_bug.cgi?id=1477589 > > Thanks, > > On Wed, Feb 7, 2018 at 1:15 PM, wrote: > >> >> Hi, >> I am experiencing a new problem : when I try to modify something in the >> network setup on the second node (added to the cluster after installing the >> engine on the other one) using the Engine GUI, I get the following error >> when validating : >> >> must match "^\b((25[0-5]|2[0-4]\d|[01]\d\ >> d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" >> Attribut : ipConfiguration.iPv4Addresses[0].gateway >> >> Moreover, on the general status of ther server, I have a "Host has no >> default route" alert. 
>> >> The ovirtmgmt network has a defined gateway of course, and the storage >> network has none because it is not required. Both server have the same >> setup, with different addresses of course :-) >> >> I have not been able to find anything useful in the logs. >> >> Is this a bug or am I doing something wrong ? >> >> Regards >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From usual.man at gmail.com Mon Feb 5 19:36:44 2018 From: usual.man at gmail.com (George Sitov) Date: Mon, 5 Feb 2018 21:36:44 +0200 Subject: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details. Message-ID: Hello! I have a problem wiith configure external provider. Edit config file - ovirt-provider-ovn.conf, set ssl parameters. systemctl start ovirt-provider-ovn start without problem. In external proveder in web gui i set: Provider URL: https://ovirt.mydomain.com:9696 Username: admin at internal Authentication URL: https://ovirt.mydomain.com:35357/v2.0/ But after i press test button i see error - Failed to communicate with the external provider, see log for additional details. 
/var/log/ovirt-engine/engine.log: 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.provider.network.openstack.BaseNetworkProviderProxy] (default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response error code: 502) 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand] (default task-29) [69fa312e-6e2e-4925-b081-385beba18a6a] Command 'org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) In /var/log/ovirt-provider-ovn.log: 2018-02-05 21:33:55,510 Starting new HTTPS connection (1): ovirt.astrecdata.com 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) Traceback (most recent call last): File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line 126, in _handle_request method, path_parts, content) File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", line 176, in handle_request return self.call_response_handler(handler, content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in call_response_handler return response_handler(content, parameters) File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", line 60, in post_tokens user_password=user_password) File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in create_token return auth.core.plugin.create_token(user_at_domain, user_password) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line 48, in create_token timeout=self._timeout()) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 62, in create_token username, password, engine_url, ca_file, timeout) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 53, in wrapper response = func(*args, **kwargs) File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line 46, in wrapper raise BadGateway(e) BadGateway: 
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579) What am I doing wrong? Please help. ---- With best regards Georgii. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marshall at marshallholdingco.com Wed Feb 7 03:16:54 2018 From: Marshall at marshallholdingco.com (Marshall Mitchell) Date: Wed, 7 Feb 2018 03:16:54 +0000 Subject: [ovirt-users] Spice Newb Message-ID: I'm attempting to get my first install of oVirt going in full swing. I have all the hosts installed and an engine running. All is smooth. I'm now trying to connect to the Spice console with my Remote Viewer, and I have no clue how to figure out what port I should be connecting to. I've been all over the web via Google looking for a process to install / configure / verify that Spice is operational, but I've not been lucky. How do I go about connecting / finding the port numbers for my VMs? I did open the required firewall range. I appreciate the help. -Marshall -------------- next part -------------- An HTML attachment was scrubbed... URL: From recreationh at gmail.com Thu Feb 8 06:57:16 2018 From: recreationh at gmail.com (Terry hey) Date: Thu, 8 Feb 2018 14:57:16 +0800 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: Dear Martin, Thank you for helping me. To answer your question, 1. Does the Test in the Edit fence agent dialog work? Ans: it shows "Test failed: Internal JSON-RPC error" Regardless of the failed result, I pressed "OK" to enable power management. Four event log entries appear in "Events": ********************************The following are the logs in "Events"******************************** Host host01 configuration was updated by admin at internal-authz. Kdump integration is enabled for host hostv01, but kdump is not configured properly on host. Health check on Host host01 indicates that future attempts to Stop this host using Power-Management are expected to fail. 
Health check on Host host01 indicates that future attempts to Start this host using Power-Management are expected to fail. 2. If not, could you please try to install the fence-agents-all package on a different host and execute? Ans: It just shows "Connection timed out". So, does that mean iLO5 is not supported yet, or have I configured it wrongly? Regards, Terry 2018-02-02 15:46 GMT+08:00 Martin Perina : > > > On Fri, Feb 2, 2018 at 5:40 AM, Terry hey wrote: > >> Dear Martin, >> >> Um.. since I am going to use an HPE ProLiant DL360 Gen10 server to set up an >> oVirt Node (hypervisor), and HP Gen10 is using iLO5 rather than iLO4, I >> would like to ask whether oVirt power management supports iLO5 or not. >> > > We don't have any hardware with iLO5 available, but there is a good > chance that it will be compatible with iLO4. Have you tried to set up your > server with iLO4? Does the Test in the Edit fence agent dialog work? If not, > could you please try to install the fence-agents-all package on a different host > and execute the following: > > fence_ilo4 -a -l -p -v -o status > > and share the output? > > Thanks > > Martin > > >> If not, do you have any idea how to set up power management with HP Gen10? >> >> Regards, >> Terry >> >> 2018-02-01 16:21 GMT+08:00 Martin Perina : >> >>> >>> >>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>> lorenzetto.luca at gmail.com> wrote: >>> >>>> Hi, >>>> >>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. Try >>>> using the standard ipmi. >>>> >>> >>> It's not just an alias; ilo3/ilo4 also have different defaults than >>> ipmilan. For example, if you use ilo4, then by default the following is used: >>> >>> >>> lanplus=1 >>> power_wait=4 >>> >>> So I recommend starting with ilo4 and adding any necessary custom options >>> into the Options field. If you need some custom >>> options, could you please share them with us? 
It would be very helpful >>> for us, if needed we could introduce ilo5 with >>> different defaults then ilo4 >>> >>> Thanks >>> >>> Martin >>> >>> >>>> Luca >>>> >>>> >>>> >>>> Il 31 gen 2018 11:14 PM, "Terry hey" ha >>>> scritto: >>>> >>>>> Dear all, >>>>> Did oVirt 4.2 Power management support iLO5 as i could not see iLO5 >>>>> option in Power Management. >>>>> >>>>> Regards >>>>> Terry >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >>> >>> -- >>> Martin Perina >>> Associate Manager, Software Engineering >>> Red Hat Czech s.r.o. >>> >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Thu Feb 8 07:41:01 2018 From: rightkicktech at gmail.com (Alex K) Date: Thu, 8 Feb 2018 09:41:01 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Hi Shani, Didn't notice that. I am attaching later vdsm logs. Thanx, Alex On Wed, Feb 7, 2018 at 5:31 PM, Shani Leviim wrote: > Hi Alex, > Sorry for the mail's delay. > > From a brief look at your logs, I've noticed that the error you've got at > the engine's log was logged at 2018-02-03 00:22:56, > while your vdsm's log ends at 2018-02-03 00:01:01. > Is there a way you can reproduce a fuller vdsm log? > > > *Regards,* > > *Shani Leviim* > > On Sat, Feb 3, 2018 at 5:41 PM, Alex K wrote: > >> Attaching vdm log from host that trigerred the error, where the Vm that >> was being cloned was running at that time. 
>> >> thanx, >> Alex >> >> On Sat, Feb 3, 2018 at 5:20 PM, Yaniv Kaul wrote: >> >>> >>> >>> On Feb 3, 2018 3:24 PM, "Alex K" wrote: >>> >>> Hi All, >>> >>> I have reproduced the backups failure. The VM that failed is named >>> Win-FileServer and is a Windows 2016 server 64bit with 300GB of disk. >>> During the cloning step the VM went unresponsive and I had to stop/start >>> it. >>> I am attaching the logs.I have another VM with same OS (named DC-Server >>> within the logs) but with smaller disk (60GB) which does not give any error >>> when it is cloned. >>> I see a line: >>> >>> EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, >>> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM >>> v2.sitedomain command SnapshotVDS failed: Message timeout which can be >>> caused by communication issues >>> >>> >>> I suggest adding relevant vdsm.log as well. >>> Y. >>> >>> >>> I appreciate any advise why I am facing such issue with the backups. >>> >>> thanx, >>> Alex >>> >>> On Tue, Jan 30, 2018 at 12:49 AM, Alex K >>> wrote: >>> >>>> Ok. I will reproduce and collect logs. >>>> >>>> Thanx, >>>> Alex >>>> >>>> On Jan 29, 2018 20:21, "Mahdi Adnan" wrote: >>>> >>>> I have Windows VMs, both client and server. >>>> if you provide the engine.log file we might have a look at it. >>>> >>>> >>>> -- >>>> >>>> Respectfully >>>> *Mahdi A. Mahdi* >>>> >>>> ------------------------------ >>>> *From:* Alex K >>>> *Sent:* Monday, January 29, 2018 5:40 PM >>>> *To:* Mahdi Adnan >>>> *Cc:* users >>>> *Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM >>>> >>>> Hi, >>>> >>>> I have observed this logged at host when the issue occurs: >>>> >>>> VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer >>>> >>>> or >>>> >>>> VDSM host.domain command GetStatsVDS failed: Connection reset by peer >>>> >>>> At engine logs have not been able to correlate. >>>> >>>> Are you hosting Windows 2016 server and Windows 10 VMs? 
>>>> The weird is that I have same setup on other clusters with no issues. >>>> >>>> Thanx, >>>> Alex >>>> >>>> On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan >>>> wrote: >>>> >>>> Hi, >>>> >>>> We have a cluster of 17 nodes, backed by GlusterFS storage, and using >>>> this same script for backup. >>>> we have no issues with it so far. >>>> have you checked engine log file ? >>>> >>>> >>>> -- >>>> >>>> Respectfully >>>> *Mahdi A. Mahdi* >>>> >>>> ------------------------------ >>>> *From:* users-bounces at ovirt.org on behalf of >>>> Alex K >>>> *Sent:* Wednesday, January 24, 2018 4:18 PM >>>> *To:* users >>>> *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM >>>> >>>> Hi all, >>>> >>>> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup >>>> on top glusterfs. >>>> On some VMs (especially one Windows server 2016 64bit with 500 GB of >>>> disk). Guest agents are installed at VMs. i almost always observe that >>>> during the backup of the VM the VM is rendered unresponsive (dashboard >>>> shows a question mark at the VM status and VM does not respond to ping or >>>> to anything). >>>> >>>> For scheduled backups I use: >>>> >>>> https://github.com/wefixit-AT/oVirtBackup >>>> >>>> The script does the following: >>>> >>>> 1. snapshot VM (this is done ok without any failure) >>>> >>>> 2. Clone snapshot (this steps renders the VM unresponsive) >>>> >>>> 3. Export Clone >>>> >>>> 4. Delete clone >>>> >>>> 5. Delete snapshot >>>> >>>> >>>> Do you have any similar experience? Any suggestions to address this? >>>> >>>> I have never seen such issue with hosted Linux VMs. >>>> >>>> The cluster has enough storage to accommodate the clone. 
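[Editor's note] For readers unfamiliar with the script, the five steps listed above boil down to a simple sequence. The sketch below illustrates that flow with a stub in place of the engine client — these names are illustrative, not the actual oVirtBackup API, and a real client must poll each step until the engine reports it finished:

```python
class EngineStub:
    """Stand-in for the engine API so the backup flow can be shown end to end."""
    def __init__(self):
        self.ops = []

    def call(self, op, target):
        # A real client would issue the API request and poll for completion here.
        self.ops.append((op, target))

def backup_vm(engine, vm):
    snap = f"{vm}-backup-snap"
    clone = f"{vm}-backup-clone"
    engine.call("create_snapshot", snap)   # step 1: snapshot the running VM
    engine.call("clone_snapshot", clone)   # step 2: clone (the step reported to hang)
    engine.call("export", clone)           # step 3: export the clone
    engine.call("delete_vm", clone)        # step 4: remove the clone
    engine.call("delete_snapshot", snap)   # step 5: remove the snapshot

engine = EngineStub()
backup_vm(engine, "Win-FileServer")
assert [op for op, _ in engine.ops] == [
    "create_snapshot", "clone_snapshot", "export", "delete_vm", "delete_snapshot"]
```

The observation in the thread is that only step 2 (the clone, a heavy storage-side copy) renders the VM unresponsive, which is why the disk size matters while step 1 does not.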
>>>> >>>> >>>> Thanx, >>>> >>>> Alex >>>> >>>> >>>> >>>> >>>> >>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: vdsm.log.5 Type: application/octet-stream Size: 5920945 bytes Desc: not available URL: From cmar at eurotux.com Thu Feb 8 09:22:38 2018 From: cmar at eurotux.com (Carlos Rodrigues) Date: Thu, 08 Feb 2018 09:22:38 +0000 Subject: [ovirt-users] Clear name_server table entries In-Reply-To: References: <1518024291.3537.5.camel@eurotux.com> Message-ID: <1518081758.3423.2.camel@eurotux.com> Hi, Many thanks. Cheers On Thu, 2018-02-08 at 10:54 +0200, Michael Burman wrote: > Hi > > Yes you may delete the entries, make sure you are not deleting > name_server of hosts that already running in your engine(if you have > such). > Deleting the multiple entries in the DB + removing the duplicated > name servers in /etc/resolv.conf should work around this bug, which > was decided to close as WONTFIX. Just re-add the the server to > engine. > > Cheers) > > On Wed, Feb 7, 2018 at 7:24 PM, Carlos Rodrigues > wrote: > > Hi, > > > > I'm getting the following problem: > > > > https://bugzilla.redhat.com/show_bug.cgi?id=1530944#c3 > > > > and after fix DNS entries no /etc/resolv.conf on host, i have to > > many > > entries on name_server table: > > > > engine=# select count(*) from name_server; > > count > > ------- > > 31401 > > (1 row) > > > > I would like to know if may i delete this entries? > > > > Best regards, > > > > -- > > Carlos Rodrigues > > > > Engenheiro de Software S?nior > > > > Eurotux Inform?tica, S.A. 
| www.eurotux.com > > (t) +351 253 680 300 (m) +351 911 926 110 > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > -- Carlos Rodrigues Engenheiro de Software S?nior Eurotux Inform?tica, S.A. | www.eurotux.com (t) +351 253 680 300 (m) +351 911 926 110 From gianluca.cecchi at gmail.com Thu Feb 8 09:43:07 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 8 Feb 2018 10:43:07 +0100 Subject: [ovirt-users] Spice Newb In-Reply-To: References: Message-ID: On Wed, Feb 7, 2018 at 4:16 AM, Marshall Mitchell < Marshall at marshallholdingco.com> wrote: > I?m attempting to get my first install of oVirt going in full swing. I > have all the hosts installed and an engine running. All is smooth. I?m not > trying to connect to the Spice console with my Remote Viewer and I have no > clue how to figure out what port I should be connecting too. I?ve been all > over the web via google looking for a process to install / configure / > verify spice is operational, but I?ve not been lucky. How do I go about > connecting / finding the port numbers for my VM?s? I did open the firewall > range required. I appreciate the help. > > > > -Marshall > > I don't know if I correctly understand your question. It is the web admin gui or the portal gui that takes care of using the correct host ip (that is where the VM is running at that moment) and the correct port on it to connect to the vnc/spice console. At host side the qemu-kvm process for a particular VM will have this snip in its command line (in my case it is a VM with console configured as Spice+VNC): . . . -vnc 192.168.50.21:0,password -k en-us -spice tls-port=5901,addr=192.168.50.21,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on . . . 
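[Editor's note] As an illustration of the point above, the SPICE endpoint can be pulled straight out of the qemu-kvm command line (e.g. from `ps -ef | grep qemu`) with a few lines of scripting. This sketch assumes the `-spice` arguments look like the snippet quoted above:

```python
import re

def spice_endpoint(qemu_cmdline):
    """Extract (address, tls_port) from a qemu-kvm -spice argument string.

    Returns None if either piece is missing (e.g. a VNC-only console)."""
    addr = re.search(r"\baddr=([\d.]+)", qemu_cmdline)
    port = re.search(r"\btls-port=(\d+)", qemu_cmdline)
    if not (addr and port):
        return None
    return addr.group(1), int(port.group(1))

sample = ("-spice tls-port=5901,addr=192.168.50.21,"
          "x509-dir=/etc/pki/vdsm/libvirt-spice")
assert spice_endpoint(sample) == ("192.168.50.21", 5901)
```

This is only for poking around on a host; as Gianluca explains, the admin/user portal already resolves the correct host and port for you, and both can change on live migration.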
Take in consideration that if you live migrate this VM, it will seamlessly change the host ip in the new qemu-kvm process on destination host and also the port, if the previous one is already taken. In recent releases of oVirt your spice client window will remain up and the same both during and after the live migration of the vm, even if the material host to which it connects changes. HIH, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From apgriffiths79 at gmail.com Thu Feb 8 10:04:27 2018 From: apgriffiths79 at gmail.com (Alan Griffiths) Date: Thu, 8 Feb 2018 10:04:27 +0000 Subject: [ovirt-users] Engine AAA LDAP startTLS Protocol Issue Message-ID: Hi, Trying to configure Engine to authenticate against OpenLDAP and I seem to be hitting a protocol bug. Attempts to test the login during the setup fail with 2018-02-07 12:27:37,872Z WARNING Exception: The connection reader was unable to successfully complete TLS negotiation: SSLException(message='Received fatal alert: protocol_version', trace='getSSLException(Alerts.java:208) / getSSLException(Alerts.java:154) / recvAlert(SSLSocketImpl.java:2033) / readRecord(SSLSocketImpl.java:1135) / performInitialHandshake(SSLSocketImpl.java:1385) / startHandshake(SSLSocketImpl.java:1413) / startHandshake(SSLSocketImpl.java:1397) / run(LDAPConnectionReader.java:301)', revision=0) Running a packet trace I see that it's trying to negotiate with TLS 1.0, but my LDAP server only support TLS 1.2. This looks like a regression as it works fine in 4.0. I see the issue in both 4.1 and 4.2 4.1.9.1 4.2.0.2 Should I submit a bug? Thanks, Alan From spfma.tech at e.mail.fr Thu Feb 8 10:10:25 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 08 Feb 2018 11:10:25 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: References: Message-ID: <20180208101025.3FC35E2264@smtp01.mail.de> Hi, I installed my nodes with the latest packages available in repositories. 
Then I installed the self-hosted engine on one of them, created the cluster and added the second node. When I encountered these network config problems and saw your answer, I thought it was time to update some packages to get corrections. So I just ran a "yum update" on the nodes and rebooted them. Then I put the engine in global maintenance mode and tried "hosted-engine --upgrade-appliance" on the node hosting it. But it failed, returning this cluster version error. The engine's About window says "Software Version:4.2.0.2-1.el7.centos" And for the hosts : OS Version:RHEL - 7 - 4.1708.el7.centos OS Description:CentOS Linux 7 (Core) Kernel Version:3.10.0 - 693.17.1.el7.x86_64 KVM Version:2.9.0 - 16.el7_4.13.1 LIBVIRT Version:libvirt-3.2.0-14.el7_4.7 VDSM Version:vdsm-4.20.9.3-1.el7.centos SPICE Version:0.12.8 - 2.el7.1 GlusterFS Version:[N/A] CEPH Version:librbd1-0.94.5-2.el7 Regards Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com a écrit: Not sure i understand from which version you are trying to upgrade and what the exact upgrade flow is. If i got it correctly, it seems that you upgraded the hosts to 4.2, but the engine is still 4.1? What exactly are the upgrade steps? Please explain the flow: what have you done after upgrading the hosts, and to what version? Cheers) On Wed, Feb 7, 2018 at 3:00 PM, wrote: Hi, Thanks a lot for your answer. I applied some updates at node level, but I forgot to upgrade the engine ! When I try to do so I get a strange error : "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading."
Here are the installed packets on my nodes : python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 ovirt-imageio-common-1.2.0-1.el7.centos.noarch ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-release42-4.2.0-1.el7.centos.noarch ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 ovirt-host-4.2.0-1.el7.centos.x86_64 ovirt-host-deploy-1.7.0-1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch ovirt-vmconsole-host-1.0.4-1.el7.noarch cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.noarch What I am supposed to do ? I see no newer packages available. Regards Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a crit: Hi This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - https://bugzilla.redhat.com/show_bug.cgi?id=1528906 The no default route bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589 Thanks, On Wed, Feb 7, 2018 at 1:15 PM, wrote: Hi, I am experiencing a new problem : when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating : must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" Attribut : ipConfiguration.iPv4Addresses[0].gateway Moreover, on the general status of ther server, I have a "Host has no default route" alert. The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both server have the same setup, with different addresses of course :-) I have not been able to find anything useful in the logs. Is this a bug or am I doing something wrong ? 
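[Editor's note] The pattern in that validation message looks mangled: the `\_` between the octet groups should almost certainly be `\.` (an escaped dot), which is exactly the bug tracked in BZ 1528906 and would explain why a well-formed gateway address fails validation. A quick check of both variants — hedged, since the `\_` may only be a rendering artifact in this archive:

```python
import re

OCTET = r"(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
# Pattern as shown in the error message, with "\_" as the separator:
reported = rf"\b({OCTET}\_){{3}}{OCTET}"
# The same pattern with a literal dot, which is what an IPv4 validator intends:
intended = rf"\b({OCTET}\.){{3}}{OCTET}"

gateway = "192.168.0.254"
assert re.fullmatch(intended, gateway)            # dotted form validates
assert re.fullmatch(reported, gateway) is None    # reported form rejects it
assert re.fullmatch(reported, "192_168_0_254")    # it only accepts underscores
```

So any real gateway entered in the GUI will fail against the reported pattern, matching the behavior described in the thread.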
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburman at redhat.com Thu Feb 8 10:17:03 2018 From: mburman at redhat.com (Michael Burman) Date: Thu, 8 Feb 2018 12:17:03 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180208101025.3FC35E2264@smtp01.mail.de> References: <20180208101025.3FC35E2264@smtp01.mail.de> Message-ID: Thanks, I'm not so familiar with the hosted-engine --upgrade-appliance flow, can someone from the list that is familiar with such flow assist? On Thu, Feb 8, 2018 at 12:10 PM, wrote: > Hi, > > I installed my nodes with the latest packages available in repositories. > Then I installed the self-hosted engine on one of them, created the cluster > and added the second node. > > When I encountered these network config problems and saw your answer, I > thougt it was time to update some packages to get corrections. > > So I just runned a "yum update" on the nodes and rebooted them. 
> > Then I put the engine in global maintenance mode and tried "hosted-engine > --upgrade-appliance" on the node hosting it. > > But it failed returning me this cluster version error. > > The engine's about windows says "Software Version:4.2.0.2-1.el7.centos" > > And for the hosts : > OS Version:RHEL - 7 - 4.1708.el7.centos > OS Description:CentOS Linux 7 (Core) > Kernel Version:3.10.0 - 693.17.1.el7.x86_64 > KVM Version:2.9.0 - 16.el7_4.13.1 > LIBVIRT Version:libvirt-3.2.0-14.el7_4.7 > VDSM Version:vdsm-4.20.9.3-1.el7.centos > SPICE Version:0.12.8 - 2.el7.1 > GlusterFS Version:[N/A] > CEPH Version:librbd1-0.94.5-2.el7 > > Regards > > > > > Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com a ?crit: > > Not sure i understand from which version you trying to upgrade and what is > the exact upgrade flow, if i got it correctly, it is seems that you > upgraded the hosts to 4.2, but engine still 4.1? > What exactly the upgrade steps, please explain the flow., what have you > done after upgrading the hosts? to what version? > > Cheers) > > On Wed, Feb 7, 2018 at 3:00 PM, wrote: > >> Hi, >> Thanks a lot for your answer. >> >> I applied some updates at node level, but I forgot to upgrade the engine ! >> >> When I try to do so I get a strange error : "Cluster PROD is at version >> 4.2 which is not supported by this upgrade flow. Please fix it before >> upgrading." 
>> >> Here are the installed packets on my nodes : >> python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 >> ovirt-imageio-common-1.2.0-1.el7.centos.noarch >> ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch >> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch >> ovirt-setup-lib-1.1.4-1.el7.centos.noarch >> ovirt-release42-4.2.0-1.el7.centos.noarch >> ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch >> ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 >> ovirt-host-4.2.0-1.el7.centos.x86_64 >> ovirt-host-deploy-1.7.0-1.el7.centos.noarch >> ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch >> ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch >> ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch >> ovirt-vmconsole-host-1.0.4-1.el7.noarch >> cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch >> ovirt-vmconsole-1.0.4-1.el7.noarch >> >> What I am supposed to do ? I see no newer packages available. >> >> Regards >> >> >> >> Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a ?crit: >> >> Hi >> >> This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - >> https://bugzilla.redhat.com/show_bug.cgi?id=1528906 >> >> The no default route bug was fixed in - https://bugzilla.redhat.com/ >> show_bug.cgi?id=1477589 >> >> Thanks, >> >> On Wed, Feb 7, 2018 at 1:15 PM, wrote: >> >>> >>> Hi, >>> I am experiencing a new problem : when I try to modify something in the >>> network setup on the second node (added to the cluster after installing the >>> engine on the other one) using the Engine GUI, I get the following error >>> when validating : >>> >>> must match "^\b((25[0-5]|2[0-4]\d|[01]\d\ >>> d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" >>> Attribut : ipConfiguration.iPv4Addresses[0].gateway >>> >>> Moreover, on the general status of ther server, I have a "Host has no >>> default route" alert. >>> >>> The ovirtmgmt network has a defined gateway of course, and the storage >>> network has none because it is not required. 
Both server have the same >>> setup, with different addresses of course :-) >>> >>> I have not been able to find anything useful in the logs. >>> >>> Is this a bug or am I doing something wrong ? >>> >>> Regards >>> >>> ------------------------------ >>> FreeMail powered by mail.fr >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> >> Michael Burman >> >> Senior Quality engineer - rhv network - redhat israel >> >> Red Hat >> >> >> >> mburman at redhat.com M: 0545355725 IM: mburman >> >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Thu Feb 8 10:19:33 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Thu, 08 Feb 2018 11:19:33 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: References: Message-ID: <20180208101933.D6E2FE2265@smtp01.mail.de> In fact I don't know what flow should be followed. I have found several documentations, but most deal with older version so I am not sure I am doing right. Is there a working documented procedure somewhere ? 
Le 08-Feb-2018 11:17:06 +0100, mburman at redhat.com a crit: Thanks, I'm not so familiar with the hosted-engine --upgrade-appliance flow, can someone from the list that is familiar with such flow assist? On Thu, Feb 8, 2018 at 12:10 PM, wrote: Hi, I installed my nodes with the latest packages available in repositories. Then I installed the self-hosted engine on one of them, created the cluster and added the second node. When I encountered these network config problems and saw your answer, I thougt it was time to update some packages to get corrections. So I just runned a "yum update" on the nodes and rebooted them. Then I put the engine in global maintenance mode and tried "hosted-engine --upgrade-appliance" on the node hosting it. But it failed returning me this cluster version error. The engine's about windows says "Software Version:4.2.0.2-1.el7.centos" And for the hosts : OS Version:RHEL - 7 - 4.1708.el7.centos OS Description:CentOS Linux 7 (Core) Kernel Version:3.10.0 - 693.17.1.el7.x86_64 KVM Version:2.9.0 - 16.el7_4.13.1 LIBVIRT Version:libvirt-3.2.0-14.el7_4.7 VDSM Version:vdsm-4.20.9.3-1.el7.centos SPICE Version:0.12.8 - 2.el7.1 GlusterFS Version:[N/A] CEPH Version:librbd1-0.94.5-2.el7 Regards Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com a crit: Not sure i understand from which version you trying to upgrade and what is the exact upgrade flow, if i got it correctly, it is seems that you upgraded the hosts to 4.2, but engine still 4.1? What exactly the upgrade steps, please explain the flow., what have you done after upgrading the hosts? to what version? Cheers) On Wed, Feb 7, 2018 at 3:00 PM, wrote: Hi, Thanks a lot for your answer. I applied some updates at node level, but I forgot to upgrade the engine ! When I try to do so I get a strange error : "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." 
Here are the installed packets on my nodes : python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 ovirt-imageio-common-1.2.0-1.el7.centos.noarch ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch ovirt-setup-lib-1.1.4-1.el7.centos.noarch ovirt-release42-4.2.0-1.el7.centos.noarch ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 ovirt-host-4.2.0-1.el7.centos.x86_64 ovirt-host-deploy-1.7.0-1.el7.centos.noarch ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch ovirt-vmconsole-host-1.0.4-1.el7.noarch cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch ovirt-vmconsole-1.0.4-1.el7.noarch What I am supposed to do ? I see no newer packages available. Regards Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a crit: Hi This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - https://bugzilla.redhat.com/show_bug.cgi?id=1528906 The no default route bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589 Thanks, On Wed, Feb 7, 2018 at 1:15 PM, wrote: Hi, I am experiencing a new problem : when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating : must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" Attribut : ipConfiguration.iPv4Addresses[0].gateway Moreover, on the general status of ther server, I have a "Host has no default route" alert. The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both server have the same setup, with different addresses of course :-) I have not been able to find anything useful in the logs. Is this a bug or am I doing something wrong ? 
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From P.Staniforth at leedsbeckett.ac.uk Thu Feb 8 10:12:04 2018 From: P.Staniforth at leedsbeckett.ac.uk (Staniforth, Paul) Date: Thu, 8 Feb 2018 10:12:04 +0000 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: References: Message-ID: <1518084724645.71365@leedsbeckett.ac.uk> Hello, you should be able to use the power saving cluster scheduling policy. https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-beta/html/administration_guide/sect-Scheduling_Policies Regards, Paul S. 
________________________________ From: users-bounces at ovirt.org on behalf of Thomas Letherby Sent: 08 February 2018 05:51 To: users at ovirt.org Subject: [ovirt-users] Maximum time node can be offline. Hello all, Is there a maximum length of time an Ovirt Node 4.2 based host can be offline in a cluster before it would have issues when powered back on? The reason I ask is in my lab I currently have a three node cluster that works really well, however a lot of the time I only actually need the resources of one host, so to save power I'd like to keep the other two offline until needed. I can always script them to boot once a week or so if I need to. Thanks, Thomas To view the terms under which this email is distributed, please go to:- http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Thu Feb 8 11:36:10 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Thu, 8 Feb 2018 13:36:10 +0200 Subject: [ovirt-users] oVirt DR: ansible with 4.1, only a subset of storage domain replicated In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 10:34 AM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Tue, Feb 6, 2018 at 9:33 PM, Maor Lipchuk wrote: > [cut] > >> What i need is that informations about vms is replicated to the remote > >> site with disk. > >> In an older test i had the issue that disks were replicated to remote > >> site, but vm configuration not! > >> I've found disks in the "Disk" tab of storage domain, but nothing on VM > >> Import. > > > > > > > > Can you reproduce it and attach the logs of the setup before the disaster > > and after the recovery? > > That could happen in case of new created VMs and Templates which were not > > yet updated in the OVF_STORE disk, since the OVF_STORE update process was > > not running yet before the disaster. 
> > Since the time of a disaster can't be anticipated, gaps like this might > > happen. > > > > I haven't tried the recovery yet using ansible. It was an experiment > of possible procedure to be performed manually and was on 4.0. > I asked about this unexpected behavior and Yaniv returned me that was > due to OVF_STORE not updated and that in 4.1 there is an api call that > updates OVF_STORE on demand. > > I'm creating a new setup today and i'll test again and check if i > still hit the issue. Anyway if the problem persist i think that > engine, for DR purposes, should upgrade the OVF_STORE as soon as > possible when a new vm is created or has disks added. > If engine will update the OVF_STORE on any VM change that could reflect the oVirt performance since it is a heavy operation, although we do have some ideas to change that design so every VM change will only change the VM OVF instead of the whole OVF_STORE disk. > [cut] > >> > >> Ok, but if i keep master storage domain on a non replicate volume, do > >> i require this function? > > > > > > Basically it should also fail on VM/Template registration in oVirt 4.1 > since > > there are also other functionalities like mapping of OVF attributes which > > was added on VM/Templates registeration. > > > > What do you mean? That i could fail to import any VM/Template? In what > case? > If using the fail-over in ovirt-ansible-disaster-recovery, the VM/Template registration process is being done automatically through the ovirt-ansible tasks and it is based on the oVirt 4.2 API. The task which registers the VMs and the Templates is being done there without indicating the target cluster id, since in oVirt 4.2 we already added the cluster name in the VM's/Template's OVF. If your engine is oVirt 4.1 the registration will fail since in oVirt 4.1 the cluster id is mandatory. 
> > Another question: > > we have 2 DCs in main site, do we require to have also 2 DCs in > recovery site o we can import all the storage domains in a single DC > on recovery site? There could be uuid collisions or similar? > I think it could work, although I suggest that the clusters should be compatible with those configurd in the primary setup otherwise you might encounter problems when you will try to fail back (and also to avoid any collisions of affinity groups/labels or networks). For example if in your primary site you had DC1 with cluster1 and DC2 with cluster2 then your secondary setup should be DC_Secondary with cluster_A and cluster_B. cluster1 will be mapped to cluster_A and cluster2 will be mapped to cluster_B. Another thing that might be troubling is with the master domain attribute in the mapping var file. That attribute indicates which storage domain is master or not. Here is an example how it is being configured in the mapping file: - dr_domain_type: nfs dr_primary_dc_name: Prod dr_primary_name: data_number dr_master_domain: True dr_secondary_dc_name: Recovery dr_secondary_address: ... In your primary site you have two master storage domains, and in your secondary site what will probably happen is that on import of storage domains only one of those two storage domains will be master. Now that I think of it, it might be better to configure the master attribute for each of the setups, like so: dr_primary_master_domain: True dr_secondary_master_domain: False > > Thank you so much for your replies, > > Luca > -- > "E' assurdo impiegare gli uomini di intelligenza eccellente per fare > calcoli che potrebbero essere affidati a chiunque se si usassero delle > macchine" > Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) > > "Internet ? la pi? grande biblioteca del mondo. > Ma il problema ? 
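[Editor's note] Since only one storage domain per site may be flagged as master, a sanity check over the mapping entries is easy to script. The sketch below runs over a made-up mapping; the attribute names follow Maor's example above, and `dr_primary_master_domain`/`dr_secondary_master_domain` are his proposed, not yet existing, attributes:

```python
def check_single_master(domains, key):
    """Exactly one entry in the DR mapping may be master for a given site."""
    masters = [d["dr_primary_name"] for d in domains if d.get(key)]
    if len(masters) != 1:
        raise ValueError(f"expected exactly one master via {key!r}, got {masters}")
    return masters[0]

# Illustrative mapping entries, mirroring the var-file example above:
mapping = [
    {"dr_primary_name": "data_number", "dr_master_domain": True},
    {"dr_primary_name": "data_extra",  "dr_master_domain": False},
]
assert check_single_master(mapping, "dr_master_domain") == "data_number"
```

Validating this before a fail-over avoids the situation Maor describes, where two master domains from the primary site collide in a single recovery data center.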
che i libri sono tutti sparsi sul pavimento" > John Allen Paulos, Matematico (1945-vivente) > > Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , < > lorenzetto.luca at gmail.com> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kosha79 at gmail.com Thu Feb 8 11:58:14 2018 From: kosha79 at gmail.com (Ilya Fedotov) Date: Thu, 8 Feb 2018 14:58:14 +0300 Subject: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details. In-Reply-To: References: Message-ID: Hello, Georgy Maybe, the problem have the different domain name and name your node name(local domain), and certificate note valid. with br, Ilya 2018-02-05 22:36 GMT+03:00 George Sitov : > Hello! > > I have a problem wiith configure external provider. > > Edit config file - ovirt-provider-ovn.conf, set ssl parameters. > systemctl start ovirt-provider-ovn start without problem. > In external proveder in web gui i set: > Provider URL: https://ovirt.mydomain.com:9696 > Username: admin at internal > Authentication URL: https://ovirt.mydomain.com:35357/v2.0/ > But after i press test button i see error - Failed to communicate with > the external provider, see log for additional details. > > /var/log/ovirt-engine/engine.log: > 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll. > provider.network.openstack.BaseNetworkProviderProxy] (default task-29) > [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response > error code: 502) > 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.provider. > TestProviderConnectivityCommand] (default task-29) > [69fa312e-6e2e-4925-b081-385beba18a6a] Command 'org.ovirt.engine.core.bll. 
> provider.TestProviderConnectivityCommand' failed: EngineException: > (Failed with error PROVIDER_FAILURE and code 5050) > > In /var/log/ovirt-provider-ovn.log: > > 2018-02-05 21:33:55,510 Starting new HTTPS connection (1): > ovirt.astrecdata.com > 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate > verify failed (_ssl.c:579) > Traceback (most recent call last): > File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line > 126, in _handle_request > method, path_parts, content) > File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", > line 176, in handle_request > return self.call_response_handler(handler, content, parameters) > File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in > call_response_handler > return response_handler(content, parameters) > File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", > line 60, in post_tokens > user_password=user_password) > File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, in > create_token > return auth.core.plugin.create_token(user_at_domain, user_password) > File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", line > 48, in create_token > timeout=self._timeout()) > File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line > 62, in create_token > username, password, engine_url, ca_file, timeout) > File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line > 53, in wrapper > response = func(*args, **kwargs) > File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line > 46, in wrapper > raise BadGateway(e) > BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed > (_ssl.c:579) > > Whan i do wrong ? > Please help. > > ---- > With best regards Georgii. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From matthias.leopold at meduniwien.ac.at Thu Feb 8 12:08:53 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Thu, 8 Feb 2018 13:08:53 +0100 Subject: [ovirt-users] effectiveness of "discard=unmap" Message-ID: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at>

Hi,

I'm sorry to bother you again with my ignorance of the DISCARD feature for block devices in general. After finding several ways to enable "discard=unmap" for oVirt disks (via the standard GUI option for iSCSI disks, or via the "diskunmap" custom property for Cinder disks), I wanted to check in the guest how effective this feature is. To my surprise, I couldn't find a difference between Linux guests with and without "discard=unmap" enabled in the VM. "lsblk -D" reports the same in both cases, and the fstrim/blkdiscard commands also appear to work with no difference. Why is this? Do I have to look at the underlying storage to find out what really happens? Shouldn't this be visible in the guest OS?

thx
matthias

From pkotas at redhat.com Thu Feb 8 12:42:39 2018 From: pkotas at redhat.com (Petr Kotas) Date: Thu, 8 Feb 2018 13:42:39 +0100 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: References: Message-ID:

Hi Stack,

have you tried it on other Linux distributions? Scientific is not officially supported.

My guess based on your log is that certificates are missing somewhere, maybe at a different path. You can check the paths against the documentation: https://www.ovirt.org/develop/release-management/features/infra/pki/#vdsm

Hope this helps.

Petr

On Thu, Feb 8, 2018 at 1:13 AM, ~Stack~ wrote:
> Greetings,
>
> I was having a lot of issues with 4.2 and 95% of them are in the change logs for 4.2.1. Since this is a new build, I just blew everything away and started from scratch with the RC release.
>
> The very first thing that I did after the engine-config was to set up my SSL cert.
I followed the directions from here: > https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ > > Logged in the first time to the web interface and everything worked! Great. > > Install my hosts (also completely fresh installs - Scientific Linux 7 > fully updated) and none would finish the install... > > > I can send the full host debug log if you want, however, I'm pretty sure > that the problem is because of the SSL somewhere. I've cut/pasted the > relevant part. > > Any advice/help, please? > > Thanks! > ~Stack~ > > > 2018-02-07 16:56:21,697-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD > otopi.plugins.ovirt_host_deploy.tune.tuned.Plugin._misc (None) > 2018-02-07 16:56:21,698-0600 DEBUG otopi.context > context._executeMethod:128 Stage misc METHOD > otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id > 2018-02-07 16:56:21,698-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD > otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) > 2018-02-07 16:56:21,699-0600 DEBUG otopi.transaction > transaction._prepare:61 preparing 'File transaction for '/etc/vdsm/vdsm.id > '' > 2018-02-07 16:56:21,699-0600 DEBUG otopi.filetransaction > filetransaction.prepare:183 file '/etc/vdsm/vdsm.id' missing > 2018-02-07 16:56:21,705-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD > otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) > 2018-02-07 16:56:21,706-0600 DEBUG otopi.context > context._executeMethod:128 Stage misc METHOD > otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks > 2018-02-07 16:56:21,706-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD > otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) > 2018-02-07 16:56:21,707-0600 DEBUG 
otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD > otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) > 2018-02-07 16:56:21,707-0600 DEBUG otopi.context > context._executeMethod:128 Stage misc METHOD > otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc > 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD > otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc (None) > 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### Setting up PKI > 2018-02-07 16:56:21,709-0600 DEBUG > otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:813 execute: > ('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', > '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), executable='None', > cwd='None', env=None > 2018-02-07 16:56:21,756-0600 DEBUG > otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:863 > execute-result: ('/usr/bin/openssl', 'req', '-new', '-newkey', > 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), rc=0 > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### Please issue VDSM > certificate based on this certificate request > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ***D:MULTI-STRING > VDSM_CERTIFICATE_REQUEST --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 
DIALOG:SEND -----BEGIN CERTIFICATE > REQUEST----- > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMZm > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > eYTWbHKkN+GlQnZ8C6fdk++htyFE+IHSzkhTyTSZdM0bPTdvhomTeCwzNlWBWdU+ > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > PrVB7j/1iksSt6RXDQUWlPDPBNfAa6NtZijEaGuxAe0RpI71G5feZmgVRmtIfrkE > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > 5BjhnCMJW46y9Y7dc2TaXzQqeVj0nkWkHt0v6AVdRWP3OHfOCvqoABny1urStvFT > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > TeAhSBVBUWTaNczBrZBpMXhXrSAe/hhLXMF3VfBV1odOOwb7AeccYkGePMxUOg8+ > 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > XMAKdDCn7N0ZC4gSyEAP9mSobvOvNObcfw02NyYdny32/edgPrXKR+ISf4IwVd0d > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > mDonT4W2ROTE/A3M/mkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpAKAMv/Vh > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > 0ByC02R3fxtA6b/OZyys+xyIAfAGxo2NSDJDQsw9Gy1QWVtJX5BGsbzuhnNJjhRm > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > 5yx0wrS/k34oEv8Wh+po1fwpI5gG1W9L96Sx+vF/+UXBenJbhEVfir/cOzjmP1Hg > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > TtK5nYnBM7Py5JdnnAPww6jPt6uRypDZqqM8YOct1OEsBr8gPvmQvt5hDGJKqW37 > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > xFbad6ILwYIE0DXAu2h9y20Pl3fy4Kb2LQDjltiaQ2IBiHFRUB/H2DOxq0NpH4z7 > 2018-02-07 
16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > wqU/ai7sXWT/Vq4R6jD+c0V0WP4+VgSkgqPvnSYHwqQUbc9Kh7RwRnVyzLupbWdM > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND Pr+MZ2D1jg27 > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND -----END CERTIFICATE REQUEST----- > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND > --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%QStart: VDSM_CERTIFICATE_CHAIN > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### Please input VDSM > certificate chain that matches certificate request, top is issuer > 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### > 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ### type > '--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' in own line to mark end, > '--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--' aborts > 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND ***Q:MULTI-STRING > VDSM_CERTIFICATE_CHAIN --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- > --=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=-- > 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine > dialog.__logString:204 DIALOG:SEND **%QEnd: VDSM_CERTIFICATE_CHAIN > 2018-02-07 16:56:22,765-0600 DEBUG otopi.context > context._executeMethod:143 method exception > Traceback 
(most recent call last):
>   File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/context.py", line 133, in _executeMethod
>     method['method']()
>   File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/ovirt-host-common/vdsm/pki.py", line 241, in _misc
>     '\n\nPlease input VDSM certificate chain that '
>   File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/otopi/dialog/machine.py", line 327, in queryMultiString
>     v = self._readline()
>   File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/dialog.py", line 248, in _readline
>     raise IOError(_('End of file'))
> IOError: End of file
> 2018-02-07 16:56:22,766-0600 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Misc configuration': End of file
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From pkotas at redhat.com Thu Feb 8 12:51:30 2018 From: pkotas at redhat.com (Petr Kotas) Date: Thu, 8 Feb 2018 13:51:30 +0100 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider In-Reply-To: References: Message-ID:

Hi Maoz,

it looks like it cannot connect due to a wrong SSH key setup. Which Linux are you using?

The guide for setting up the SSH connection to libvirt is here: https://wiki.libvirt.org/page/SSHSetup

Maybe this helps?

Petr

On Wed, Feb 7, 2018 at 10:53 PM, maoz zadok wrote:
> Hello there,
>
> I'm following the https://www.ovirt.org/develop/release-management/features/virt/KvmToOvirt/ guide in order to import VMs from Libvirt to oVirt using ssh.
> URL: "qemu+ssh://host1.example.org/system"
>
> and get the following error:
> Failed to communicate with the external provider, see log for additional details.
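In the sshd trace quoted further down, every connection is closed by the client while still in the [preauth] phase, right after key exchange and before any authentication attempt — the signature of the client refusing the server's host key rather than the server rejecting a login. A small, hedged sketch (plain Python; the sample lines are abridged from the log below) for counting such client-side aborts:

```python
import re

# A client that refuses the server's host key disconnects during key
# exchange, so sshd logs "Connection closed by <host> port <p> [preauth]"
# with no "Accepted"/"Failed password" line in between.
PREAUTH_CLOSE = re.compile(r"Connection closed by \S+ port \d+ \[preauth\]")

def preauth_aborts(log_text: str) -> int:
    """Count connections the client dropped before authentication."""
    return sum(bool(PREAUTH_CLOSE.search(line))
               for line in log_text.splitlines())

sample = """\
Feb  7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]
Feb  7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 port 48148 [preauth]
Feb  7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147 port 48150 [preauth]
"""
print(preauth_aborts(sample))  # 2
```

If this count keeps climbing while interactive logins work, the usual fix is pre-accepting the target's host key for the exact user the connection runs as (for example via ssh-keyscan into that user's known_hosts), which matches the key-setup advice above.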
> > > *oVirt agent log:* > > *- Failed to retrieve VMs information from external server > qemu+ssh://XXX.XXX.XXX.XXX/system* > *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot recv > data: Host key verification failed.: Connection reset by peer* > > > > *remote host sshd DEBUG log:* > *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port > 48148 on XXX.XXX.XXX.123 port 22* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; > client software version OpenSSH_7.4* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat OpenSSH* > compat 0x04000000* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string > SSH-2.0-OpenSSH_7.4* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode for > protocol 2.0* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled > [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 > [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: > ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received > [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: > curve25519-sha256 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: > ecdsa-sha2-nistp256 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: 
expecting > SSH2_MSG_KEX_ECDH_INIT [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks > [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS > [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 > port 48148 [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup* > *Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006* > *Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: Set /proc/self/oom_score_adj to > 0* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 newsock > 5 pipe 7 sock 8* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after dupping: 3, > 3* > *Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 port > 48150 on XXX.XXX.XXX.123 port 22* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version 2.0; > client software version OpenSSH_7.4* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat OpenSSH* > compat 0x04000000* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string > SSH-2.0-OpenSSH_7.4* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility mode for > protocol 2.0* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled > [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74 > [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types: > ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT sent [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received > [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: algorithm: > curve25519-sha256 [preauth]* > *Feb 7 16:38:29 XXX 
sshd[110007]: debug1: kex: host key algorithm: > ecdsa-sha2-nistp256 [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting > SSH2_MSG_KEX_ECDH_INIT [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 blocks > [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS > [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147 > port 48150 [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup* > *Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child 110008* > *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: Set /proc/self/oom_score_adj to > 0* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 newsock > 5 pipe 7 sock 8* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: inetd sockets after dupping: 3, > 3* > *Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 port > 48152 on XXX.XXX.XXX.123 port 22* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version 2.0; > client software version OpenSSH_7.4* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat OpenSSH* > compat 0x04000000* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: Local version string > SSH-2.0-OpenSSH_7.4* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode for > 
protocol 2.0* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled > [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74 > [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types: > ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received > [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm: > curve25519-sha256 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm: > ecdsa-sha2-nistp256 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting > SSH2_MSG_KEX_ECDH_INIT [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 blocks > [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS > [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: Connection closed by XXX.XXX.XXX.147 > port 48152 [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup* > *Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child 110010* > *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110011.* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: Set /proc/self/oom_score_adj to > 0* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 
out 5 newsock > 5 pipe 7 sock 8* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after dupping: 3, > 3* > *Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 port > 48154 on XXX.XXX.XXX.123 port 22* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version 2.0; > client software version OpenSSH_7.4* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat OpenSSH* > compat 0x04000000* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string > SSH-2.0-OpenSSH_7.4* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode for > protocol 2.0* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled > [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74 > [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types: > ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received > [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm: > curve25519-sha256 [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: > ecdsa-sha2-nistp256 [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: > chacha20-poly1305 at openssh.com MAC: > compression: none [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 > dh_need=64 [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting > SSH2_MSG_KEX_ECDH_INIT [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks > [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent 
[preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS > [preauth]* > *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147 > port 48154 [preauth]* > > > Thank you! > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omachace at redhat.com Thu Feb 8 12:56:01 2018 From: omachace at redhat.com (Ondra Machacek) Date: Thu, 8 Feb 2018 13:56:01 +0100 Subject: [ovirt-users] Engine AAA LDAP startTLS Protocol Issue In-Reply-To: References: Message-ID: On 02/08/2018 11:04 AM, Alan Griffiths wrote: > Hi, > > Trying to configure Engine to authenticate against OpenLDAP and I seem > to be hitting a protocol bug. > > Attempts to test the login during the setup fail with > > 2018-02-07 12:27:37,872Z WARNING Exception: The connection reader was > unable to successfully complete TLS negotiation: > SSLException(message='Received fatal alert: protocol_version', > trace='getSSLException(Alerts.java:208) / > getSSLException(Alerts.java:154) / recvAlert(SSLSocketImpl.java:2033) > / readRecord(SSLSocketImpl.java:1135) / > performInitialHandshake(SSLSocketImpl.java:1385) / > startHandshake(SSLSocketImpl.java:1413) / > startHandshake(SSLSocketImpl.java:1397) / > run(LDAPConnectionReader.java:301)', revision=0) > > Running a packet trace I see that it's trying to negotiate with TLS > 1.0, but my LDAP server only support TLS 1.2. I've sent a fix: https://gerrit.ovirt.org/87327 To work around it, please add this to your profile properties file: pool.default.ssl.startTLSProtocol = TLSv1.2 > > This looks like a regression as it works fine in 4.0. > > I see the issue in both 4.1 and 4.2 > > 4.1.9.1 > 4.2.0.2 > > Should I submit a bug?
> > Thanks,
> > Alan
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From ykaul at redhat.com Thu Feb 8 12:59:53 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Thu, 8 Feb 2018 14:59:53 +0200 Subject: [ovirt-users] qcow2 images corruption In-Reply-To: References: Message-ID:

On Feb 7, 2018 7:08 PM, "Nicolas Ecarnot" wrote:

Hello,

TL;DR: qcow2 images keep getting corrupted. Any workaround?

Long version:
I have already started this discussion on the oVirt and qemu-block mailing lists under similar circumstances, but I have learned more in the months since, so here is some information:

- We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS 7.{2,3} hosts
- Hosts:
  - CentOS 7.2 1511:
    - Kernel: 3.10.0-327
    - KVM: 2.3.0-31
    - libvirt: 1.2.17
    - vdsm: 4.17.32-1
  - CentOS 7.3 1611:
    - Kernel: 3.10.0-514
    - KVM: 2.3.0-31
    - libvirt: 2.0.0-10
    - vdsm: 4.17.32-1

All are somewhat old releases. I suggest upgrading to the latest RHEL and qemu-kvm bits. Later on, upgrade oVirt.
Y.

- Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated network
- It varies from week to week, but all in all there are around 32 hosts, 8 storage domains and, for various reasons, very few VMs (fewer than 200).
- One peculiar point is that most of our VMs are given an additional dedicated network interface that is iSCSI-connected to some volumes of our SAN - these volumes are not part of the oVirt setup. That could lead to a lot of additional iSCSI traffic.

From time to time, a random VM appears paused by oVirt. Digging into the oVirt engine logs, then into the host vdsm logs, it appears that the host considers the qcow2 image corrupted. In what I consider conservative behavior, vdsm stops any interaction with this image and marks it as paused. Any attempt to unpause it leads to the same conservative pause.
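When that happens, the first step is locating the logical volume that backs the paused VM's disk. On oVirt block domains the LVs carry tags, and (if memory serves — please verify against your own setup) the disk's image UUID appears as an IU_<uuid> tag, so the output of "lvs --noheadings -o vg_name,lv_name,lv_tags" can be matched against it. A rough sketch with made-up UUIDs:

```python
def lv_for_image(lvs_output: str, image_uuid: str):
    """Return (vg_name, lv_name) of the LV tagged IU_<image_uuid>, else None."""
    wanted = "IU_" + image_uuid
    for line in lvs_output.splitlines():
        parts = line.split()
        if len(parts) != 3:          # expect: vg_name lv_name lv_tags
            continue
        vg, lv, tags = parts
        if wanted in tags.split(","):  # tags are comma-separated
            return vg, lv
    return None

# Hypothetical 'lvs --noheadings -o vg_name,lv_name,lv_tags' output:
sample = (
    "sd-uuid-1 vol-uuid-a IU_img-1,MD_4,PU_00000000-0000-0000-0000-000000000000\n"
    "sd-uuid-1 vol-uuid-b IU_img-2,MD_5,PU_00000000-0000-0000-0000-000000000000\n"
)
print(lv_for_image(sample, "img-2"))  # ('sd-uuid-1', 'vol-uuid-b')
```

On block storage the VG name is the storage domain UUID and the LV name the volume UUID, so the resulting device path would be /dev/<vg_name>/<lv_name> (again, confirm on your version before relying on it).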
After having found the right logical volume hosting the qcow2 image (https://access.redhat.com/solutions/1173623), I can run qemu-img check on it.
- On 80% of my VMs, I find no errors.
- On 15% of them, I find leaked-cluster errors that I can correct using "qemu-img check -r all".
- On 5% of them, I find leaked-cluster errors plus further fatal errors, which cannot be corrected with qemu-img. In rare cases qemu-img can correct them but destroys large parts of the image (it becomes unusable), and in other cases it cannot correct them at all.

Months ago I already sent a similar message, but the error then was about "No space left on device" (https://www.mail-archive.com/qemu-block at gnu.org/msg00110.html). This time I don't have the message about space, only corruption.

I kept reading and found a similar discussion in the Proxmox group:
https://lists.ovirt.org/pipermail/users/2018-February/086750.html
https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heavy-disk-i-o.32865/page-2

What I read that is similar to my case:
- usage of qcow2
- heavy disk I/O
- use of the virtio-blk driver

In the Proxmox thread they tend to say that using virtio-scsi is the solution. I have asked this question of oVirt experts (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but it's not clear the driver is to blame. I agree with the answer Yaniv Kaul gave me, saying I have to properly report the issue, so I'd like to know what specific information I can give you now.

As you can imagine, all this setup is in production, and for most of the VMs I cannot "play" with them. Moreover, we have launched a campaign of nightly stopping every VM, running qemu-img check on each one, then booting it again. So it might take some time before I find another corrupted image (which I'll carefully preserve for debugging).

Other information: we very rarely do snapshots, but I suspect that automated migrations of VMs could trigger similar behavior on qcow2 images.
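For what it's worth, a nightly check campaign like the one described above can be scripted around qemu-img check's documented exit codes instead of parsing its text output. A minimal sketch (the wrapper is illustrative only; on block storage the LV must be activated before checking, and only the leaked-clusters case is safely repairable in place):

```python
import subprocess

# qemu-img check exit codes, per qemu-img(1):
#   0 = no errors, 1 = the check itself could not be run,
#   2 = image is corrupted, 3 = leaked clusters found (repairable)
VERDICTS = {0: "clean", 1: "check-failed", 2: "corrupted", 3: "leaked-clusters"}

def verdict(rc: int) -> str:
    """Map a qemu-img check return code to a verdict string."""
    return VERDICTS.get(rc, "unknown")

def check_image(path: str) -> str:
    """Run 'qemu-img check' on an image/device and classify the result."""
    proc = subprocess.run(["qemu-img", "check", path],
                          stdout=subprocess.DEVNULL,
                          stderr=subprocess.DEVNULL)
    return verdict(proc.returncode)
```

A "leaked-clusters" verdict can be fixed with "qemu-img check -r leaks"; on a "corrupted" verdict it is safer to first copy the volume aside for debugging, since "-r all" can destroy large parts of the image, as noted above.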
Last point about the versions we use : yes that's old, yes we're planning to upgrade, but we don't know when. Regards, -- Nicolas ECARNOT _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From apgriffiths79 at gmail.com Thu Feb 8 13:30:46 2018 From: apgriffiths79 at gmail.com (Alan Griffiths) Date: Thu, 8 Feb 2018 13:30:46 +0000 Subject: [ovirt-users] Engine AAA LDAP startTLS Protocol Issue In-Reply-To: References: Message-ID: That works. Thanks. On 8 February 2018 at 12:56, Ondra Machacek wrote: > On 02/08/2018 11:04 AM, Alan Griffiths wrote: >> >> Hi, >> >> Trying to configure Engine to authenticate against OpenLDAP and I seem >> to be hitting a protocol bug. >> >> Attempts to test the login during the setup fail with >> >> 2018-02-07 12:27:37,872Z WARNING Exception: The connection reader was >> unable to successfully complete TLS negotiation: >> SSLException(message='Received fatal alert: protocol_version', >> trace='getSSLException(Alerts.java:208) / >> getSSLException(Alerts.java:154) / recvAlert(SSLSocketImpl.java:2033) >> / readRecord(SSLSocketImpl.java:1135) / >> performInitialHandshake(SSLSocketImpl.java:1385) / >> startHandshake(SSLSocketImpl.java:1413) / >> startHandshake(SSLSocketImpl.java:1397) / >> run(LDAPConnectionReader.java:301)', revision=0) >> >> Running a packet trace I see that it's trying to negotiate with TLS >> 1.0, but my LDAP server only support TLS 1.2. > > > I've sent a fix: > > https://gerrit.ovirt.org/87327 > > To workaround it just please add to you profile properties file: > > pool.default.ssl.startTLSProtocol = TLSv1.2 > >> >> This looks like a regression as it works fine in 4.0. >> >> I see the issue in both 4.1 and 4.2 >> >> 4.1.9.1 >> 4.2.0.2 >> >> Should I submit a bug? 
>> >> Thanks, >> >> Alan >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > From didi at redhat.com Thu Feb 8 13:34:18 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 8 Feb 2018 15:34:18 +0200 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 2:42 PM, Petr Kotas wrote: > Hi Stack, > > have you tried it on other linux distributions? Scientific is not officially > supported. > > My guess based on your log is there are somewhere missing certificates, > maybe different path?. > You can check the paths by the documentation: > https://www.ovirt.org/develop/release-management/features/infra/pki/#vdsm > > Hope this helps. > > Petr > > > > On Thu, Feb 8, 2018 at 1:13 AM, ~Stack~ wrote: >> >> Greetings, >> >> I was having a lot of issues with 4.2 and 95% of them are in the change >> logs for 4.2.1. Since this is a new build, I just blew everything away >> and started from scratch with the RC release. >> >> The very first thing that I did after the engine-config was to set up my >> SSL cert. I followed the directions from here: >> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ >> >> Logged in the first time to the web interface and everything worked! >> Great. >> >> Install my hosts (also completely fresh installs - Scientific Linux 7 >> fully updated) and none would finish the install... >> >> >> I can send the full host debug log if you want, however, I'm pretty sure >> that the problem is because of the SSL somewhere. I've cut/pasted the >> relevant part. Please check/share also engine.log of the relevant time frame. Thanks. >> >> Any advice/help, please? >> >> Thanks! 
>> ~Stack~ >> >> >> 2018-02-07 16:56:21,697-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD >> otopi.plugins.ovirt_host_deploy.tune.tuned.Plugin._misc (None) >> 2018-02-07 16:56:21,698-0600 DEBUG otopi.context >> context._executeMethod:128 Stage misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id >> 2018-02-07 16:56:21,698-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) >> 2018-02-07 16:56:21,699-0600 DEBUG otopi.transaction >> transaction._prepare:61 preparing 'File transaction for >> '/etc/vdsm/vdsm.id'' >> 2018-02-07 16:56:21,699-0600 DEBUG otopi.filetransaction >> filetransaction.prepare:183 file '/etc/vdsm/vdsm.id' missing >> 2018-02-07 16:56:21,705-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsm.vdsmid.Plugin._store_id (None) >> 2018-02-07 16:56:21,706-0600 DEBUG otopi.context >> context._executeMethod:128 Stage misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks >> 2018-02-07 16:56:21,706-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) >> 2018-02-07 16:56:21,707-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventEnd STAGE misc METHOD >> otopi.plugins.ovirt_host_deploy.vdsmhooks.hooks.Plugin._hooks (None) >> 2018-02-07 16:56:21,707-0600 DEBUG otopi.context >> context._executeMethod:128 Stage misc METHOD >> otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc >> 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%EventStart STAGE misc METHOD >> 
otopi.plugins.ovirt_host_common.vdsm.pki.Plugin._misc (None) >> 2018-02-07 16:56:21,708-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### Setting up PKI >> 2018-02-07 16:56:21,709-0600 DEBUG >> otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:813 execute: >> ('/usr/bin/openssl', 'req', '-new', '-newkey', 'rsa:2048', '-nodes', >> '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), executable='None', >> cwd='None', env=None >> 2018-02-07 16:56:21,756-0600 DEBUG >> otopi.plugins.ovirt_host_common.vdsm.pki plugin.executeRaw:863 >> execute-result: ('/usr/bin/openssl', 'req', '-new', '-newkey', >> 'rsa:2048', '-nodes', '-subj', '/', '-keyout', '/tmp/tmpQkrIuV.tmp'), rc=0 >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### Please issue VDSM >> certificate based on this certificate request >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ***D:MULTI-STRING >> VDSM_CERTIFICATE_REQUEST --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND -----BEGIN CERTIFICATE >> REQUEST----- >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> MIICRTCCAS0CAQAwADCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAMZm >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> eYTWbHKkN+GlQnZ8C6fdk++htyFE+IHSzkhTyTSZdM0bPTdvhomTeCwzNlWBWdU+ >> 2018-02-07 16:56:21,757-0600 DEBUG 
otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> PrVB7j/1iksSt6RXDQUWlPDPBNfAa6NtZijEaGuxAe0RpI71G5feZmgVRmtIfrkE >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> 5BjhnCMJW46y9Y7dc2TaXzQqeVj0nkWkHt0v6AVdRWP3OHfOCvqoABny1urStvFT >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> TeAhSBVBUWTaNczBrZBpMXhXrSAe/hhLXMF3VfBV1odOOwb7AeccYkGePMxUOg8+ >> 2018-02-07 16:56:21,757-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> XMAKdDCn7N0ZC4gSyEAP9mSobvOvNObcfw02NyYdny32/edgPrXKR+ISf4IwVd0d >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> mDonT4W2ROTE/A3M/mkCAwEAAaAAMA0GCSqGSIb3DQEBCwUAA4IBAQCpAKAMv/Vh >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> 0ByC02R3fxtA6b/OZyys+xyIAfAGxo2NSDJDQsw9Gy1QWVtJX5BGsbzuhnNJjhRm >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> 5yx0wrS/k34oEv8Wh+po1fwpI5gG1W9L96Sx+vF/+UXBenJbhEVfir/cOzjmP1Hg >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> TtK5nYnBM7Py5JdnnAPww6jPt6uRypDZqqM8YOct1OEsBr8gPvmQvt5hDGJKqW37 >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> xFbad6ILwYIE0DXAu2h9y20Pl3fy4Kb2LQDjltiaQ2IBiHFRUB/H2DOxq0NpH4z7 >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> wqU/ai7sXWT/Vq4R6jD+c0V0WP4+VgSkgqPvnSYHwqQUbc9Kh7RwRnVyzLupbWdM >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND Pr+MZ2D1jg27 >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND -----END 
CERTIFICATE REQUEST----- >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND >> --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%QStart: VDSM_CERTIFICATE_CHAIN >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### Please input VDSM >> certificate chain that matches certificate request, top is issuer >> 2018-02-07 16:56:21,758-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### >> 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ### type >> '--=451b80dc-996f-432e-9e4f-2b29ef6d1141=--' in own line to mark end, >> '--=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=--' aborts >> 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND ***Q:MULTI-STRING >> VDSM_CERTIFICATE_CHAIN --=451b80dc-996f-432e-9e4f-2b29ef6d1141=-- >> --=451b80dc-996f-ABORT-9e4f-2b29ef6d1141=-- >> 2018-02-07 16:56:21,759-0600 DEBUG otopi.plugins.otopi.dialog.machine >> dialog.__logString:204 DIALOG:SEND **%QEnd: VDSM_CERTIFICATE_CHAIN >> 2018-02-07 16:56:22,765-0600 DEBUG otopi.context >> context._executeMethod:143 method exception >> Traceback (most recent call last): >> File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/context.py", line 133, in >> _executeMethod >> method['method']() >> File >> "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/ovirt-host-common/vdsm/pki.py", >> line 241, in _misc >> '\n\nPlease input VDSM certificate chain that ' >> File "/tmp/ovirt-h7XmTvEqc3/otopi-plugins/otopi/dialog/machine.py", >> line 327, 
in queryMultiString >> v = self._readline() >> File "/tmp/ovirt-h7XmTvEqc3/pythonlib/otopi/dialog.py", line 248, in >> _readline >> raise IOError(_('End of file')) >> IOError: End of file >> 2018-02-07 16:56:22,766-0600 ERROR otopi.context >> context._executeMethod:152 Failed to execute stage 'Misc configuration': >> End of file >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Didi From nicolas at ecarnot.net Thu Feb 8 13:38:54 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Thu, 8 Feb 2018 14:38:54 +0100 Subject: [ovirt-users] qcow2 images corruption In-Reply-To: References: Message-ID: <1e09b91d-1bed-7354-7b7d-8b0dfdac1c4c@ecarnot.net> On 08/02/2018 at 13:59, Yaniv Kaul wrote: > > > On Feb 7, 2018 7:08 PM, "Nicolas Ecarnot" > wrote: > > Hello, > > TL;DR: qcow2 images keep getting corrupted. Any workaround? > > Long version: > This discussion has already been launched by me on the oVirt and > on qemu-block mailing lists, under similar circumstances, but I > have learned further things over the months, and here is some information: > > - We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using > CentOS 7.{2,3} hosts > - Hosts: > - CentOS 7.2 1511: > - Kernel = 3.10.0 327 > - KVM: 2.3.0-31 > - libvirt: 1.2.17 > - vdsm: 4.17.32-1 > - CentOS 7.3 1611: > - Kernel 3.10.0 514 > - KVM: 2.3.0-31 > - libvirt: 2.0.0-10 > - vdsm: 4.17.32-1 > > > All are somewhat old releases. I suggest upgrading to the latest RHEL > and qemu-kvm bits. > > Later on, upgrade oVirt. > Y. Hello Yaniv, We could discuss for hours the fact that CentOS 7.3 was released in January 2017, and is thus not that old. 
And we could also discuss for hours the gap between developers' will to push their freshest releases and the curb we - industry users - put on adopting such new versions. In my case, the virtualization infrastructure is just one of the +30 domains I have to master every day, and the more stable the better. In the setup described previously, the qemu qcow2 images were correct, then not. We did not change anything. We have to find a workaround and we need your expertise. Not understanding the cause of the corruption threatens to put us in the same situation in oVirt 4.2. -- Nicolas Ecarnot -------------- next part -------------- An HTML attachment was scrubbed... URL: From maozza at gmail.com Thu Feb 8 14:09:43 2018 From: maozza at gmail.com (maoz zadok) Date: Thu, 8 Feb 2018 16:09:43 +0200 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider In-Reply-To: References: Message-ID: Using the command line on the engine machine (as root) works fine. I don't use an ssh key from the agent GUI but the authentication section (with root user and password). I think that it's a bug; I managed to migrate with TCP, but I just want to let you know. Is it possible to use an ssh key from the agent GUI? How can I get the key? On Thu, Feb 8, 2018 at 2:51 PM, Petr Kotas wrote: > Hi Maoz, > > it looks like it cannot connect due to a wrong setup of the ssh keys. Which linux > are you using? > The guide for setting up the ssh connection to libvirt is here: > https://wiki.libvirt.org/page/SSHSetup > > Maybe it helps? 
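For reference, a sketch of the key setup described on the linked SSHSetup page; the hostname is illustrative, and a scratch directory stands in for the vdsm user's home (typically /var/lib/vdsm) so the commands are runnable as-is:

```shell
# Generate a passwordless keypair for the qemu+ssh connection.
# A scratch directory keeps this runnable anywhere; on a real proxy host
# the vdsm user's ~/.ssh would hold the key instead.
dir=/tmp/kvm2ovirt-demo
mkdir -p "$dir" && rm -f "$dir/id_rsa" "$dir/id_rsa.pub"
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"

# Install the public key on the libvirt host (hostname illustrative):
#   ssh-copy-id -i "$dir/id_rsa.pub" root@host1.example.org
# Pre-seeding known_hosts avoids the "Host key verification failed" error:
#   ssh-keyscan host1.example.org >> ~/.ssh/known_hosts

[ -s "$dir/id_rsa.pub" ] && echo "keypair ready"
```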
>> >> >> *oVirt agent log:* >> >> *- Failed to retrieve VMs information from external server >> qemu+ssh://XXX.XXX.XXX.XXX/system* >> *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot >> recv data: Host key verification failed.: Connection reset by peer* >> >> >> >> *remote host sshd DEBUG log:* >> *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port >> 48148 on XXX.XXX.XXX.123 port 22* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; >> client software version OpenSSH_7.4* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat >> OpenSSH* compat 0x04000000* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string >> SSH-2.0-OpenSSH_7.4* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode >> for protocol 2.0* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: >> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: >> curve25519-sha256 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: >> ecdsa-sha2-nistp256 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> 
*Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting >> SSH2_MSG_KEX_ECDH_INIT [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 >> port 48148 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup* >> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006* >> *Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Set /proc/self/oom_score_adj >> to 0* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 newsock >> 5 pipe 7 sock 8* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after dupping: >> 3, 3* >> *Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 port >> 48150 on XXX.XXX.XXX.123 port 22* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version 2.0; >> client software version OpenSSH_7.4* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat >> OpenSSH* compat 0x04000000* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string >> SSH-2.0-OpenSSH_7.4* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility mode >> for protocol 2.0* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74 >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types: >> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT sent >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: 
debug1: kex: algorithm: >> curve25519-sha256 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: host key algorithm: >> ecdsa-sha2-nistp256 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting >> SSH2_MSG_KEX_ECDH_INIT [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 blocks >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS >> [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147 >> port 48150 [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup* >> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child 110008* >> *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Set /proc/self/oom_score_adj >> to 0* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 newsock >> 5 pipe 7 sock 8* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: inetd sockets after dupping: >> 3, 3* >> *Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 port >> 48152 on XXX.XXX.XXX.123 port 22* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version 2.0; >> client software version OpenSSH_7.4* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat >> OpenSSH* compat 0x04000000* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: 
Local version string >> SSH-2.0-OpenSSH_7.4* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode >> for protocol 2.0* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74 >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types: >> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm: >> curve25519-sha256 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm: >> ecdsa-sha2-nistp256 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting >> SSH2_MSG_KEX_ECDH_INIT [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 blocks >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: Connection closed by XXX.XXX.XXX.147 >> port 48152 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup* >> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child 110010* >> *Feb 7 16:38:30 XXX sshd[109922]: 
debug1: Forked child 110011.* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Set /proc/self/oom_score_adj >> to 0* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 out 5 newsock >> 5 pipe 7 sock 8* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after dupping: >> 3, 3* >> *Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 port >> 48154 on XXX.XXX.XXX.123 port 22* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version 2.0; >> client software version OpenSSH_7.4* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat >> OpenSSH* compat 0x04000000* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string >> SSH-2.0-OpenSSH_7.4* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode >> for protocol 2.0* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74 >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types: >> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm: >> curve25519-sha256 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: >> ecdsa-sha2-nistp256 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: >> chacha20-poly1305 at openssh.com MAC: >> compression: none [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 need=64 >> dh_need=64 [preauth]* >> *Feb 7 16:38:30 XXX 
sshd[110011]: debug1: expecting >> SSH2_MSG_KEX_ECDH_INIT [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS >> [preauth]* >> *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147 >> port 48154 [preauth]* >> >> >> Thank you! >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pkotas at redhat.com Thu Feb 8 14:12:16 2018 From: pkotas at redhat.com (Petr Kotas) Date: Thu, 8 Feb 2018 15:12:16 +0100 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider In-Reply-To: References: Message-ID: You can generate one :). There are different guides for different platforms. The link I sent is the good start on where to put the keys and how to set it up. Petr On Thu, Feb 8, 2018 at 3:09 PM, maoz zadok wrote: > Using the command line on the engine machine (as root) works fine. I > don't use ssh key from the agent GUI but the authentication section (with > root user and password), > I think that it's a bug, I manage to migrate with TCP but I just want to > let you know. > > is it possible to use ssh-key from the agent GUI? how can I get the key? > > On Thu, Feb 8, 2018 at 2:51 PM, Petr Kotas wrote: > >> Hi Maoz, >> >> it looks like cannot connect due to wrong setup of ssh keys. Which linux >> are you using? >> The guide for setting the ssh connection to libvirt is here: >> https://wiki.libvirt.org/page/SSHSetup >> >> May it helps? 
>> >> Petr >> >> On Wed, Feb 7, 2018 at 10:53 PM, maoz zadok wrote: >> >>> Hello there, >>> >>> I'm following https://www.ovirt.org/develop/ >>> release-management/features/virt/KvmToOvirt/ guide in order to import >>> VMS from Libvirt to oVirt using ssh. >>> URL: "qemu+ssh://host1.example.org/system" >>> >>> and get the following error: >>> Failed to communicate with the external provider, see log for additional >>> details. >>> >>> >>> *oVirt agent log:* >>> >>> *- Failed to retrieve VMs information from external server >>> qemu+ssh://XXX.XXX.XXX.XXX/system* >>> *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot >>> recv data: Host key verification failed.: Connection reset by peer* >>> >>> >>> >>> *remote host sshd DEBUG log:* >>> *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port >>> 48148 on XXX.XXX.XXX.123 port 22* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; >>> client software version OpenSSH_7.4* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat >>> OpenSSH* compat 0x04000000* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string >>> SSH-2.0-OpenSSH_7.4* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode >>> for protocol 2.0* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled >>> [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 >>> [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: >>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent >>> [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received >>> [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: >>> curve25519-sha256 [preauth]* >>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: >>> ecdsa-sha2-nistp256 [preauth]* >>> *Feb 7 16:38:29 XXX 
sshd[110005]: debug1: kex: client->server cipher: >>> chacha20-poly1305 at openssh.com MAC: >>> compression: none [preauth]* >>> [... identical sshd debug log snipped ...]
>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: >>> ecdsa-sha2-nistp256 [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: >>> chacha20-poly1305 at openssh.com MAC: >>> compression: none [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: >>> chacha20-poly1305 at openssh.com MAC: >>> compression: none [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>> need=64 dh_need=64 [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>> need=64 dh_need=64 [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting >>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks >>> [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent >>> [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS >>> [preauth]* >>> *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147 >>> port 48154 [preauth]* >>> >>> >>> Thank you! >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmirecki at redhat.com Thu Feb 8 14:44:48 2018 From: mmirecki at redhat.com (Marcin Mirecki) Date: Thu, 8 Feb 2018 15:44:48 +0100 Subject: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details. In-Reply-To: References: Message-ID: Hello George, Probably your engine and provider certs do not match. The engine pki should be in: /etc/pki/ovirt-engine/certs/ The provider keys are defined in the SSL section of the config file (/etc/ovirt-provider-ovn/conf.d/...): [SSL] https-enabled=true ssl-key-file=... ssl-cert-file=... ssl-cacert-file=... You can compare the keys/certs using openssl. 
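The openssl comparison suggested above can be sketched as follows. This is a minimal, hedged example: it generates a throwaway key and self-signed certificate so the commands are runnable anywhere; on a real host you would point the same commands at the engine files under /etc/pki/ovirt-engine/certs/ and at the ssl-key-file / ssl-cert-file paths from the provider's [SSL] section instead.

```shell
# Generate a throwaway RSA key and self-signed certificate so the
# comparison below can run anywhere; on a real host, substitute the
# engine certificate and the provider's configured key/cert files.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout demo.key -out demo.crt -days 1 -subj "/CN=demo" 2>/dev/null

# A certificate and a private key belong together when their
# public-key moduli are identical.
cert_mod=$(openssl x509 -noout -modulus -in demo.crt)
key_mod=$(openssl rsa -noout -modulus -in demo.key)
if [ "$cert_mod" = "$key_mod" ]; then
    echo "cert and key match"
else
    echo "cert/key MISMATCH"
fi

# To check whether two certificate files are the same certificate
# (e.g. the engine's copy vs. the provider's copy), compare fingerprints:
openssl x509 -noout -fingerprint -sha256 -in demo.crt
```

If the two fingerprints differ, or a key's modulus does not match its certificate, that mismatch is the kind of thing that produces a CERTIFICATE_VERIFY_FAILED error.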
Was the provider created using engine-setup? For testing purposes you can change the "https-enabled" to false and try connecting using http. Thanks, Marcin On Thu, Feb 8, 2018 at 12:58 PM, Ilya Fedotov wrote: > Hello, Georgy > > Maybe the problem is that the domain name and your node > name (local domain) are different, and the certificate is not valid. > > > > with br, Ilya > > 2018-02-05 22:36 GMT+03:00 George Sitov : > >> Hello! >> >> I have a problem with configuring the external provider. >> >> Edit config file - ovirt-provider-ovn.conf, set ssl parameters. >> systemctl start ovirt-provider-ovn starts without problem. >> In the external provider web GUI I set: >> Provider URL: https://ovirt.mydomain.com:9696 >> Username: admin at internal >> Authentication URL: https://ovirt.mydomain.com:35357/v2.0/ >> But after I press the test button I see the error - Failed to communicate with >> the external provider, see log for additional details. >> >> /var/log/ovirt-engine/engine.log: >> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro >> vider.network.openstack.BaseNetworkProviderProxy] (default task-29) >> [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response >> error code: 502) >> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro >> vider.TestProviderConnectivityCommand] (default task-29) >> [69fa312e-6e2e-4925-b081-385beba18a6a] Command ' >> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' >> failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) >> >> In /var/log/ovirt-provider-ovn.log: >> >> 2018-02-05 21:33:55,510 Starting new HTTPS connection (1): >> ovirt.astrecdata.com >> 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate >> verify failed (_ssl.c:579) >> Traceback (most recent call last): >> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >> 126, in _handle_request >> method, path_parts, content) >> File
"/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >> line 176, in handle_request >> return self.call_response_handler(handler, content, parameters) >> File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in >> call_response_handler >> return response_handler(content, parameters) >> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", >> line 60, in post_tokens >> user_password=user_password) >> File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, >> in create_token >> return auth.core.plugin.create_token(user_at_domain, user_password) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", >> line 48, in create_token >> timeout=self._timeout()) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 62, in create_token >> username, password, engine_url, ca_file, timeout) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 53, in wrapper >> response = func(*args, **kwargs) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 46, in wrapper >> raise BadGateway(e) >> BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed >> (_ssl.c:579) >> >> What am I doing wrong? >> Please help. >> >> ---- >> With best regards Georgii. >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From f.rothenstein at bodden-kliniken.de Thu Feb 8 15:00:45 2018 From: f.rothenstein at bodden-kliniken.de (Frank Rothenstein) Date: Thu, 08 Feb 2018 16:00:45 +0100 Subject: [ovirt-users] vdsmd fails after upgrade 4.1 -> 4.2 In-Reply-To: References: <1517838592.1716.15.camel@bodden-kliniken.de> Message-ID: <1518102045.1716.54.camel@bodden-kliniken.de> Thanks Thomas, it seems you were right. I followed the instructions to enable hugepages via kernel command line and after reboot vdsmd starts correctly. (I went back to 4.1.9 in between, added the kernel command line and upgraded to 4.2) The docs/release notes should mention it - or did I miss it? Am Dienstag, den 06.02.2018, 17:17 -0800 schrieb Thomas Davis: > sorry, make that: > > hugeadm --pool-list > Size Minimum Current Maximum Default > 2097152 1024 1024 1024 * > 1073741824 4 4 4 > > > On Tue, Feb 6, 2018 at 5:16 PM, Thomas Davis wrote: > > I found that you now need hugepage1g support. The error messages > > are wrong - it's not truly a libvirt problem, it's that hugepages1g > > support is missing for libvirt. > > > > add something like: > > > > default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M > > hugepages=1024 to the kernel command line. > > > > You can also do a 'yum install libhugetlbfs-utils', then do: > > > > hugeadm --list > > Mount Point Options > > /dev/hugepages rw,seclabel,relatime > > /dev/hugepages1G rw,seclabel,relatime,pagesize=1G > > > > if you do not see the /dev/hugepages1G listed, then vdsmd/libvirt > > will not start. > > > > > > > > > > > > > > On Mon, Feb 5, 2018 at 5:49 AM, Frank Rothenstein > dden-kliniken.de> wrote: > > > Hi, > > > > > > I'm currently stuck - after upgrading 4.1 to 4.2 I cannot start > > > the > > > host-processes.
> > > systemctl start vdsmd fails with following lines in journalctl: > > > > > > > > > > > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: vdsm: Running wait_for_network > > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: vdsm: Running run_init_hooks > > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: vdsm: Running check_is_configured > > > Feb 05 14:40:15 glusternode1.bodden-kliniken.net > > > sasldblistusers2[10440]: DIGEST-MD5 common mech free > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: Error: > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: One of the modules is not configured > > > to > > > work with VDSM. > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: To configure the module use the > > > following: > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: 'vdsm-tool configure [--module > > > module- > > > name]'. > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: If all modules are not configured > > > try to > > > use: > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: 'vdsm-tool configure --force' > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: (The force flag will stop the > > > module's > > > service and start it > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: afterwards automatically to load the > > > new > > > configuration.) 
> > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: abrt is already configured for vdsm > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: lvm is configured for vdsm > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: libvirt is not configured for vdsm > > > yet > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: Current revision of multipath.conf > > > detected, preserving > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: Modules libvirt are not configured > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net > > > vdsmd_init_common.sh[10414]: vdsm: stopped during execute > > > check_is_configured task (task returned with error code 1). > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: > > > vdsmd.service: control process exited, code=exited status=1 > > > Feb 05 14:40:16 glusternode1.bodden-kliniken.net systemd[1]: > > > Failed to > > > start Virtual Desktop Server Manager. > > > -- Subject: Unit vdsmd.service has failed > > > -- Defined-By: systemd > > > -- Support: http://lists.freedesktop.org/mailman/listinfo/systemd > > > -devel > > > -- > > > -- Unit vdsmd.service has failed. Frank Rothenstein Systemadministrator Fon: +49 3821 700 125 Fax: +49 3821 700 190 Internet: www.bodden-kliniken.de E-Mail: f.rothenstein at bodden-kliniken.de _____________________________________________ BODDEN-KLINIKEN Ribnitz-Damgarten GmbH Sandhufe 2 18311 Ribnitz-Damgarten Telefon: 03821-700-0 Telefax: 03821-700-240 E-Mail: info at bodden-kliniken.de Internet: http://www.bodden-kliniken.de Sitz: Ribnitz-Damgarten, Amtsgericht: Stralsund, HRB 2919, Steuer-Nr.: 079/133/40188 Aufsichtsratsvorsitzende: Carmen Schröter, Geschäftsführer: Dr. Falko Milski, MBA Der Inhalt dieser E-Mail ist ausschließlich für den bezeichneten Adressaten bestimmt.
Wenn Sie nicht der vorgesehene Adressat dieser E-Mail oder dessen Vertreter sein sollten, beachten Sie bitte, dass jede Form der Veröffentlichung, Vervielfältigung oder Weitergabe des Inhalts dieser E-Mail unzulässig ist. Wir bitten Sie, sofort den Absender zu informieren und die E-Mail zu löschen. BODDEN-KLINIKEN Ribnitz-Damgarten GmbH 2017 *** Virenfrei durch Kerio Mail Server und SOPHOS Antivirus *** -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 547f8827.f9d4cce7.png Type: image/png Size: 18036 bytes Desc: not available URL: From donny at fortnebula.com Thu Feb 8 15:18:14 2018 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Feb 2018 10:18:14 -0500 Subject: [ovirt-users] Cannot Remove Disk Message-ID: Ovirt 4.2 has been humming away quite nicely for me in the last few months, and now I am hitting an issue when I try to touch any api call that has to do with a specific disk. This disk resides on a hyperconverged DC, and none of the other disks seem to be affected. Here is the error thrown. 2018-02-08 10:13:20,005-05 ERROR [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default task-22) [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool 5a497956-0380-021e-0025-00000000035e Any ideas what can be done to fix this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Thu Feb 8 15:20:50 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 8 Feb 2018 16:20:50 +0100 Subject: [ovirt-users] Info about exporting from vSphere Message-ID: Hello, I have this kind of situation.
Source env: It is vSphere 6.5 (both vCenter Server appliance and ESXi hosts) where I have an admin account to connect to, but currently only to vCenter and not to the ESXi hosts The VM to be migrated is Windows 2008 R2 SP1 with virtual hw version 8 (ESXi 5.0 and later) and has one boot disk 35Gb and one data disk 250Gb. The SCSI controller is LSI logic sas and network vmxnet3 It has no snapshots at the moment I see in my oVirt 4.1.9 that I can import from: 1) VMware 2) VMware Virtual Appliance and also found related documentation in the RHEV 4.1 Virtual Machine Management pdf Some doubts: - what is the best between the 2 methods if I can choose? Their Pros&Cons? - Does 1) imply that I also need the ESXi account? Currently my windows domain account that gives me access to vcenter doesn't work connecting to ESXi hosts - also it seems that 1) is more intrusive, while for 2) I only need to put the ova file into some nfs share... Thanks in advance, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Thu Feb 8 15:28:34 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 8 Feb 2018 16:28:34 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 4:20 PM, Gianluca Cecchi wrote: > Hello, > I have this kind of situation. [cut] > I see in my oVirt 4.1.9 that I can import from: > > 1) VMware > > 2) VMware Virtual Appliance [cut] > - what is the best between the 2 methods if I can choose? Their Pros&Cons? > Mode 1 imports directly from the vcenter, method 2 requires you to export the vm to OVA and then copy to some path and import to RHV Mode 1 requires a user on vcenter only. Any operation you'll do will go through vcenter. I'm importing ~600 VM using mode 1.
Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From gianluca.cecchi at gmail.com Thu Feb 8 15:34:47 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Thu, 8 Feb 2018 16:34:47 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 4:28 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Thu, Feb 8, 2018 at 4:20 PM, Gianluca Cecchi > wrote: > > Hello, > > I have this kind of situation. > [cut] > > I see in my oVirt 4.1.9 that I can import from: > > > > 1) VMware > > > > 2) VMware Virtual Appliance > [cut] > > - what is the best between the 2 methods if I can choose? Their Pros&Cons? > > > > Mode 1 imports directly from the vcenter, method 2 requires you to > export the vm to OVA and then copy to some path and import to RHV > > Mode 1 requires a user on vcenter only. Any operation you'll do will > go through vcenter. > Thanks Luca. I asked because I see this inside the gui. https://drive.google.com/file/d/12vI9RUq9t4J--jlkqSxvG2jqylGfD2sP/view?usp=sharing Probably you do it via api and you don't need to provide ESXi credentials? Did you try also from web admin gui leaving empty the fields related to ESXi? -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Thu Feb 8 15:37:29 2018 From: msivak at redhat.com (Martin Sivak) Date: Thu, 8 Feb 2018 16:37:29 +0100 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Andrej, this might be related to the recent fixes of yours in that area. Can you take a look please?
Best regards Martin Sivak On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis wrote: > Ovirt 4.2 has been humming away quite nicely for me in the last few months, > and now I am hitting an issue when try to touch any api call that has to do > with a specific disk. This disk resides on a hyperconverged DC, and none of > the other disks seem to be affected. Here is the error thrown. > > 2018-02-08 10:13:20,005-05 ERROR > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default task-22) > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool > 5a497956-0380-021e-0025-00000000035e > > > > Any ideas what can be done to fix this? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From lorenzetto.luca at gmail.com Thu Feb 8 15:38:18 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Thu, 8 Feb 2018 16:38:18 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 4:34 PM, Gianluca Cecchi wrote: > > Thanks Luca. > I asked because I see this inside the gui. > https://drive.google.com/file/d/12vI9RUq9t4J--jlkqSxvG2jqylGfD2sP/view?usp=sharing > > Probably you do it via api and you don't need to provide ESXi credentials? > Did you try also from web admin gui leaving empty the fields related to > ESXi? > If you're not used to the libvirt connection to vmware, all those fields can look scary. Anyway you need to fill out all the values, because they are needed to locate the vm you want to migrate. When using that function you'll point to a specific host connected to the given vcenter, in the given datacenter, inside a given cluster.
The import function will contact vcenter and ask for all vms in shutdown status on that host, then allow you to continue with the wizard. Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From akrejcir at redhat.com Thu Feb 8 16:03:16 2018 From: akrejcir at redhat.com (Andrej Krejcir) Date: Thu, 8 Feb 2018 17:03:16 +0100 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: The error message means that the data center (storage pool) where the quota is defined is different from the data center where the disk is. It seems like a bug, as it should not be possible to assign a quota to a disk from a different data center. To fix it, try setting the quota of the disk to any quota from the same data center. Regards, Andrej On 8 February 2018 at 16:37, Martin Sivak wrote: > Andrej, this might be related to the recent fixes of yours in that > area. Can you take a look please?
> > > > 2018-02-08 10:13:20,005-05 ERROR > > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default > task-22) > > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: > > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota > > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool > > 5a497956-0380-021e-0025-00000000035e > > > > > > > > Any ideas what can be done to fix this? > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Feb 8 16:06:41 2018 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Feb 2018 11:06:41 -0500 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Any operation on the disk throws this error, to include changing the quota. On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir wrote: > The error message means that the data center (storage pool) where the > quota is defined is different from the data center where the disk is. > > It seems like a bug, as it should not be possible to assign a quota to a > disk from a different data center. > > To fix it, try setting the quota of the disk to any quota from the same > data center. > > ?Regards, > Andrej? > > > On 8 February 2018 at 16:37, Martin Sivak wrote: > >> Andrej, this might be related to the recent fixes of yours in that >> area. Can you take a look please? >> >> Best regards >> >> Martin Sivak >> >> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis wrote: >> > Ovirt 4.2 has been humming away quite nicely for me in the last few >> months, >> > and now I am hitting an issue when try to touch any api call that has >> to do >> > with a specific disk. This disk resides on a hyperconverged DC, and >> none of >> > the other disks seem to be affected. Here is the error thrown. 
>> > >> > 2018-02-08 10:13:20,005-05 ERROR >> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default >> task-22) >> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: >> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota >> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >> > 5a497956-0380-021e-0025-00000000035e >> > >> > >> > >> > Any ideas what can be done to fix this? >> > >> > _______________________________________________ >> > Users mailing list >> > Users at ovirt.org >> > http://lists.ovirt.org/mailman/listinfo/users >> > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akrejcir at redhat.com Thu Feb 8 16:42:41 2018 From: akrejcir at redhat.com (Andrej Krejcir) Date: Thu, 8 Feb 2018 17:42:41 +0100 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Do the operations work in the UI? If not, then the DB has to be changed manually: $ psql engine UPDATE image_storage_domain_map sd_map SET quota_id = NULL FROM images WHERE sd_map.image_id = images.image_guid AND images.image_group_id = 'ID_OF_THE_DISK'; On 8 February 2018 at 17:06, Donny Davis wrote: > Any operation on the disk throws this error, to include changing the quota. > > On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir > wrote: > >> The error message means that the data center (storage pool) where the >> quota is defined is different from the data center where the disk is. >> >> It seems like a bug, as it should not be possible to assign a quota to a >> disk from a different data center. >> >> To fix it, try setting the quota of the disk to any quota from the same >> data center. >> >> ?Regards, >> Andrej? >> >> >> On 8 February 2018 at 16:37, Martin Sivak wrote: >> >>> Andrej, this might be related to the recent fixes of yours in that >>> area. Can you take a look please? 
>>> >>> Best regards >>> >>> Martin Sivak >>> >>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>> wrote: >>> > Ovirt 4.2 has been humming away quite nicely for me in the last few >>> months, >>> > and now I am hitting an issue when try to touch any api call that has >>> to do >>> > with a specific disk. This disk resides on a hyperconverged DC, and >>> none of >>> > the other disks seem to be affected. Here is the error thrown. >>> > >>> > 2018-02-08 10:13:20,005-05 ERROR >>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default >>> task-22) >>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: >>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: Quota >>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>> > 5a497956-0380-021e-0025-00000000035e >>> > >>> > >>> > >>> > Any ideas what can be done to fix this? >>> > >>> > _______________________________________________ >>> > Users mailing list >>> > Users at ovirt.org >>> > http://lists.ovirt.org/mailman/listinfo/users >>> > >>> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Feb 8 16:52:08 2018 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Feb 2018 11:52:08 -0500 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Again, any operation on the disk throws the error. The UI, API both throw the exception On Thu, Feb 8, 2018 at 11:42 AM, Andrej Krejcir wrote: > Do the operations work in the UI? > If not, then the DB has to be changed manually: > > $ psql engine > > UPDATE image_storage_domain_map sd_map > SET quota_id = NULL > FROM images > WHERE sd_map.image_id = images.image_guid > AND images.image_group_id = 'ID_OF_THE_DISK'; > > > On 8 February 2018 at 17:06, Donny Davis wrote: > >> Any operation on the disk throws this error, to include changing the >> quota. 
>> >> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >> wrote: >> >>> The error message means that the data center (storage pool) where the >>> quota is defined is different from the data center where the disk is. >>> >>> It seems like a bug, as it should not be possible to assign a quota to a >>> disk from a different data center. >>> >>> To fix it, try setting the quota of the disk to any quota from the same >>> data center. >>> >>> ?Regards, >>> Andrej? >>> >>> >>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>> >>>> Andrej, this might be related to the recent fixes of yours in that >>>> area. Can you take a look please? >>>> >>>> Best regards >>>> >>>> Martin Sivak >>>> >>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>> wrote: >>>> > Ovirt 4.2 has been humming away quite nicely for me in the last few >>>> months, >>>> > and now I am hitting an issue when try to touch any api call that has >>>> to do >>>> > with a specific disk. This disk resides on a hyperconverged DC, and >>>> none of >>>> > the other disks seem to be affected. Here is the error thrown. >>>> > >>>> > 2018-02-08 10:13:20,005-05 ERROR >>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default >>>> task-22) >>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: >>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>> Quota >>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>> > 5a497956-0380-021e-0025-00000000035e >>>> > >>>> > >>>> > >>>> > Any ideas what can be done to fix this? >>>> > >>>> > _______________________________________________ >>>> > Users mailing list >>>> > Users at ovirt.org >>>> > http://lists.ovirt.org/mailman/listinfo/users >>>> > >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From akrejcir at redhat.com Thu Feb 8 16:51:56 2018 From: akrejcir at redhat.com (Andrej Krejcir) Date: Thu, 8 Feb 2018 17:51:56 +0100 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Or, it should be enough to disable the quota in the data center, then change it for the disk and reenable it again. On 8 February 2018 at 17:42, Andrej Krejcir wrote: > Do the operations work in the UI? > If not, then the DB has to be changed manually: > > $ psql engine > > UPDATE image_storage_domain_map sd_map > SET quota_id = NULL > FROM images > WHERE sd_map.image_id = images.image_guid > AND images.image_group_id = 'ID_OF_THE_DISK'; > > > On 8 February 2018 at 17:06, Donny Davis wrote: > >> Any operation on the disk throws this error, to include changing the >> quota. >> >> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >> wrote: >> >>> The error message means that the data center (storage pool) where the >>> quota is defined is different from the data center where the disk is. >>> >>> It seems like a bug, as it should not be possible to assign a quota to a >>> disk from a different data center. >>> >>> To fix it, try setting the quota of the disk to any quota from the same >>> data center. >>> >>> ?Regards, >>> Andrej? >>> >>> >>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>> >>>> Andrej, this might be related to the recent fixes of yours in that >>>> area. Can you take a look please? >>>> >>>> Best regards >>>> >>>> Martin Sivak >>>> >>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>> wrote: >>>> > Ovirt 4.2 has been humming away quite nicely for me in the last few >>>> months, >>>> > and now I am hitting an issue when try to touch any api call that has >>>> to do >>>> > with a specific disk. This disk resides on a hyperconverged DC, and >>>> none of >>>> > the other disks seem to be affected. Here is the error thrown. 
>>>> > >>>> > 2018-02-08 10:13:20,005-05 ERROR >>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default >>>> task-22) >>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during ValidateFailure.: >>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>> Quota >>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>> > 5a497956-0380-021e-0025-00000000035e >>>> > >>>> > >>>> > >>>> > Any ideas what can be done to fix this? >>>> > >>>> > _______________________________________________ >>>> > Users mailing list >>>> > Users at ovirt.org >>>> > http://lists.ovirt.org/mailman/listinfo/users >>>> > >>>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Thu Feb 8 16:56:05 2018 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Feb 2018 11:56:05 -0500 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Disabling the quota for that DC did the trick. The funny part is it was never enabled. I put it in audit mode, tried a delete, got the error... and then disabled it. Worked, I am a happy camper... Thanks guys. On Thu, Feb 8, 2018 at 11:51 AM, Andrej Krejcir wrote: > Or, it should be enough to disable the quota in the data center, then > change it for the disk and reenable it again. > > On 8 February 2018 at 17:42, Andrej Krejcir wrote: > >> Do the operations work in the UI? >> If not, then the DB has to be changed manually: >> >> $ psql engine >> >> UPDATE image_storage_domain_map sd_map >> SET quota_id = NULL >> FROM images >> WHERE sd_map.image_id = images.image_guid >> AND images.image_group_id = 'ID_OF_THE_DISK'; >> >> >> On 8 February 2018 at 17:06, Donny Davis wrote: >> >>> Any operation on the disk throws this error, to include changing the >>> quota. 
>>> >>> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >>> wrote: >>> >>>> The error message means that the data center (storage pool) where the >>>> quota is defined is different from the data center where the disk is. >>>> >>>> It seems like a bug, as it should not be possible to assign a quota to >>>> a disk from a different data center. >>>> >>>> To fix it, try setting the quota of the disk to any quota from the same >>>> data center. >>>> >>>> ?Regards, >>>> Andrej? >>>> >>>> >>>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>>> >>>>> Andrej, this might be related to the recent fixes of yours in that >>>>> area. Can you take a look please? >>>>> >>>>> Best regards >>>>> >>>>> Martin Sivak >>>>> >>>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>>> wrote: >>>>> > Ovirt 4.2 has been humming away quite nicely for me in the last few >>>>> months, >>>>> > and now I am hitting an issue when try to touch any api call that >>>>> has to do >>>>> > with a specific disk. This disk resides on a hyperconverged DC, and >>>>> none of >>>>> > the other disks seem to be affected. Here is the error thrown. >>>>> > >>>>> > 2018-02-08 10:13:20,005-05 ERROR >>>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] (default >>>>> task-22) >>>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during >>>>> ValidateFailure.: >>>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>>> Quota >>>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>>> > 5a497956-0380-021e-0025-00000000035e >>>>> > >>>>> > >>>>> > >>>>> > Any ideas what can be done to fix this? >>>>> > >>>>> > _______________________________________________ >>>>> > Users mailing list >>>>> > Users at ovirt.org >>>>> > http://lists.ovirt.org/mailman/listinfo/users >>>>> > >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From donny at fortnebula.com Thu Feb 8 23:00:57 2018 From: donny at fortnebula.com (Donny Davis) Date: Thu, 8 Feb 2018 18:00:57 -0500 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: So now when I create a new disk on the same domain with quota disabled, I get - Cannot edit Virtual Disk. Quota is not valid. This is a new machine, created after the above issue was solved On Thu, Feb 8, 2018 at 11:56 AM, Donny Davis wrote: > Disabling the quota for that DC did the trick. The funny part is it was > never enabled. I put it in audit mode, tried a delete, got the error... and > then disabled it. > > Worked, I am a happy camper... Thanks guys. > > On Thu, Feb 8, 2018 at 11:51 AM, Andrej Krejcir > wrote: > >> Or, it should be enough to disable the quota in the data center, then >> change it for the disk and reenable it again. >> >> On 8 February 2018 at 17:42, Andrej Krejcir wrote: >> >>> Do the operations work in the UI? >>> If not, then the DB has to be changed manually: >>> >>> $ psql engine >>> >>> UPDATE image_storage_domain_map sd_map >>> SET quota_id = NULL >>> FROM images >>> WHERE sd_map.image_id = images.image_guid >>> AND images.image_group_id = 'ID_OF_THE_DISK'; >>> >>> >>> On 8 February 2018 at 17:06, Donny Davis wrote: >>> >>>> Any operation on the disk throws this error, to include changing the >>>> quota. >>>> >>>> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >>>> wrote: >>>> >>>>> The error message means that the data center (storage pool) where the >>>>> quota is defined is different from the data center where the disk is. >>>>> >>>>> It seems like a bug, as it should not be possible to assign a quota to >>>>> a disk from a different data center. >>>>> >>>>> To fix it, try setting the quota of the disk to any quota from the >>>>> same data center. >>>>> >>>>> ?Regards, >>>>> Andrej? 
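A note on the exception quoted in this thread (InvalidQuotaParametersException: "Quota ... does not match storage pool ..."): the check that fails is, in essence, that a quota belongs to one data center (storage pool) and a disk may only reference a quota from the data center its storage domain is in. The UPDATE above clears the stale reference so the check can no longer fail. A toy model of that validation (illustrative only, not the engine's actual code):

```python
def quota_matches_pool(quota: dict, disk_storage_pool_id: str) -> bool:
    # A quota is defined inside one data center (storage pool); a disk
    # may only reference a quota from its own data center.
    return quota["storage_pool_id"] == disk_storage_pool_id

# The situation from the log: the disk's quota points at another pool,
# so every command validating the disk is rejected.
quota = {"id": "quota-1", "storage_pool_id": "pool-A"}
print(quota_matches_pool(quota, "pool-B"))  # False -> command rejected
```

This is why clearing quota_id (or re-pointing the disk at a quota from its own data center) makes the RemoveDisk validation pass again.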
>>>>> >>>>> >>>>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>>>> >>>>>> Andrej, this might be related to the recent fixes of yours in that >>>>>> area. Can you take a look please? >>>>>> >>>>>> Best regards >>>>>> >>>>>> Martin Sivak >>>>>> >>>>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>>>> wrote: >>>>>> > Ovirt 4.2 has been humming away quite nicely for me in the last few >>>>>> months, >>>>>> > and now I am hitting an issue when try to touch any api call that >>>>>> has to do >>>>>> > with a specific disk. This disk resides on a hyperconverged DC, and >>>>>> none of >>>>>> > the other disks seem to be affected. Here is the error thrown. >>>>>> > >>>>>> > 2018-02-08 10:13:20,005-05 ERROR >>>>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] >>>>>> (default task-22) >>>>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during >>>>>> ValidateFailure.: >>>>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>>>> Quota >>>>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>>>> > 5a497956-0380-021e-0025-00000000035e >>>>>> > >>>>>> > >>>>>> > >>>>>> > Any ideas what can be done to fix this? >>>>>> > >>>>>> > _______________________________________________ >>>>>> > Users mailing list >>>>>> > Users at ovirt.org >>>>>> > http://lists.ovirt.org/mailman/listinfo/users >>>>>> > >>>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xrs444 at xrs444.net Fri Feb 9 01:30:22 2018 From: xrs444 at xrs444.net (Thomas Letherby) Date: Fri, 09 Feb 2018 01:30:22 +0000 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: <1518084724645.71365@leedsbeckett.ac.uk> References: <1518084724645.71365@leedsbeckett.ac.uk> Message-ID: Thanks, that answers my follow up question! :) My concern is that I could have a host off-line for a month say, is that going to cause any issues? 
Thanks, Thomas On Thu, Feb 8, 2018 at 3:12 AM Staniforth, Paul < P.Staniforth at leedsbeckett.ac.uk> wrote: > Hello, > > you should be able to use the power saving cluster scheduling > policy. > > > > https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.2-beta/html/administration_guide/sect-Scheduling_Policies > > > Regards, > > Paul S. > ------------------------------ > *From:* users-bounces at ovirt.org on behalf of > Thomas Letherby > *Sent:* 08 February 2018 05:51 > *To:* users at ovirt.org > *Subject:* [ovirt-users] Maximum time node can be offline. > > Hello all, > > Is there a maximum length of time an Ovirt Node 4.2 based host can be > offline in a cluster before it would have issues when powered back on? > > The reason I ask is in my lab I currently have a three node cluster that > works really well, however a lot of the time I only actually need the > resources of one host, so to save power I'd like to keep the other two > offline until needed. > > I can always script them to boot once a week or so if I need to. > > Thanks, > > Thomas > To view the terms under which this email is distributed, please go to:- > http://disclaimer.leedsbeckett.ac.uk/disclaimer/disclaimer.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dd432690 at gmail.com Fri Feb 9 06:49:21 2018 From: dd432690 at gmail.com (David David) Date: Fri, 9 Feb 2018 10:49:21 +0400 Subject: [ovirt-users] IndexError python-sdk Message-ID: Hi all. 
python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64

The issue is that I can't upload a snapshot: I get an IndexError when running upload_disk_snapshots.py
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk_snapshots.py

Output:

Traceback (most recent call last):
  File "snapshot_upload.py", line 298, in
    images_chain = get_images_chain(disk_path)
  File "snapshot_upload.py", line 263, in get_images_chain
    base_volume = [v for v in volumes_info.values() if 'full-backing-filename' not in v][0]
IndexError: list index out of range

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gianluca.cecchi at gmail.com  Fri Feb 9 07:25:15 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Fri, 9 Feb 2018 08:25:15 +0100
Subject: [ovirt-users] Maximum time node can be offline.
In-Reply-To: 
References: <1518084724645.71365@leedsbeckett.ac.uk>
Message-ID: 

On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby wrote:
> Thanks, that answers my follow up question! :)
>
> My concern is that I could have a host off-line for a month say, is that
> going to cause any issues?
>
> Thanks,
>
> Thomas

I think that if in the mean time you don't make any configuration changes and you don't update anything, there is no reason to have problems. In case of changes done, it could depend on what they are: are you thinking about any particular scenario?

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reznikov_aa at soskol.com  Fri Feb 9 07:53:48 2018
From: reznikov_aa at soskol.com (Reznikov Alexei)
Date: Fri, 9 Feb 2018 10:53:48 +0300
Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf
Message-ID: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>

Hi all!

After upgrading from oVirt 4.0 to 4.1, I have trouble adding the next HostedEngine host to my cluster via the web UI... the host is added successfully and comes up, but HE is not active on this host.
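A note on the IndexError in the traceback quoted earlier: it comes from indexing an empty list. The comprehension keeps only volumes that have no 'full-backing-filename' entry (the base of the qcow2 snapshot chain), and in this case it found none, which usually means qemu-img reported a backing file for every volume. A defensive rewrite of that lookup (a sketch only; `find_base_volume` is a hypothetical helper, not part of the SDK example) turns the crash into a readable error:

```python
def find_base_volume(volumes_info: dict) -> dict:
    # upload_disk_snapshots.py assumes exactly one volume has no
    # 'full-backing-filename' key, i.e. the base of the snapshot chain.
    # If every volume has a backing file (for example, the chain was
    # copied without its base image), indexing with [0] raises IndexError.
    candidates = [v for v in volumes_info.values()
                  if 'full-backing-filename' not in v]
    if not candidates:
        raise ValueError(
            "no base volume found: every volume has a backing file; "
            "the snapshot chain is probably incomplete")
    return candidates[0]
```

Checking `qemu-img info --backing-chain` on the exported files should show exactly one image without a backing file; if not, the chain itself is the problem, not the upload script.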
Logs from the troubled host:

# cat agent.log
> KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'

# cat /etc/ovirt-hosted-engine/hosted-engine.conf
ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem
host_id=2

The deploy log from the engine is attached.

Troubled host:
ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch
ovirt-host-deploy-1.6.7-1.el7.centos.noarch
vdsm-4.19.45-1.el7.centos.x86_64
CentOS Linux release 7.4.1708 (Core)

Engine host:
ovirt-release41-4.1.9-1.el7.centos.noarch
ovirt-engine-4.1.9.1-1.el7.centos.noarch
CentOS Linux release 7.4.1708 (Core)

Please help me fix it.

Thanx, Alex.

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ovirt-host-deploy-20180209095031-h4.lan-6f068f8b.log
Type: text/x-log
Size: 907421 bytes
Desc: not available
URL: 

From alexeynikolaev.post at yandex.ru  Fri Feb 9 08:05:08 2018
From: alexeynikolaev.post at yandex.ru (Nikolaev Alexei)
Date: Fri, 09 Feb 2018 11:05:08 +0300
Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf
In-Reply-To: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>
References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>
Message-ID: <1047931518163508@web5j.yandex.ru>

An HTML attachment was scrubbed...
URL: 

From sbonazzo at redhat.com  Fri Feb 9 08:06:17 2018
From: sbonazzo at redhat.com (Sandro Bonazzola)
Date: Fri, 9 Feb 2018 09:06:17 +0100
Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf
In-Reply-To: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>
References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>
Message-ID:
host add succesfully and > become up, but HE not active in this host. > > log's from trouble host > # cat agent.log > > KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, > key=gateway' > Adding Simone > > # cat /etc/ovirt-hosted-engine/hosted-engine.conf > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > host_id=2 > > log deploy from engine in attach. > > trouble host: > ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch > ovirt-host-deploy-1.6.7-1.el7.centos.noarch > vdsm-4.19.45-1.el7.centos.x86_64 > CentOS Linux release 7.4.1708 (Core) > > engine host: > ovirt-release41-4.1.9-1.el7.centos.noarch > ovirt-engine-4.1.9.1-1.el7.centos.noarch > CentOS Linux release 7.4.1708 (Core) > Please help me fix it. > > Thanx, Alex. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabrice.bacchella at orange.fr Fri Feb 9 08:20:02 2018 From: fabrice.bacchella at orange.fr (Fabrice Bacchella) Date: Fri, 9 Feb 2018 09:20:02 +0100 Subject: [ovirt-users] oVirt CLI Question In-Reply-To: References: Message-ID: <58F72D4F-E381-4479-890E-1047B3A6B36E@orange.fr> > Le 8 f?vr. 2018 ? 09:44, Ondra Machacek a ?crit : >> Is this project part of oVirt distro? It looks like in state of active >> development with last updates 2 months ago. >> https://github.com/fbacchella/ovirtcmd > > No, it isn't part of oVirt distribution. > It's my projet. Do you have any question about it ? 
From jaganz at gmail.com  Fri Feb 9 09:20:28 2018
From: jaganz at gmail.com (yayo (j))
Date: Fri, 9 Feb 2018 10:20:28 +0100
Subject: [ovirt-users] Use the "hosted_engine" data domain as data domain for others VM
Message-ID: 

Hi,

Is there any problem with using the "hosted_engine" data domain to put disks of other VMs on it? I have created a "too big" "hosted_engine" data domain, so I want to use that space...

Thank you

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From msivak at redhat.com  Fri Feb 9 09:26:25 2018
From: msivak at redhat.com (Martin Sivak)
Date: Fri, 9 Feb 2018 10:26:25 +0100
Subject: [ovirt-users] Maximum time node can be offline.
In-Reply-To: 
References: <1518084724645.71365@leedsbeckett.ac.uk>
Message-ID: 

Hi,

the hosts are almost stateless and we set up most of what is needed during activation. Hosted engine has some configuration stored locally, but that is just the path to the storage domain. I think you should be fine unless you change the network topology significantly. I would also install security updates once in a while.

We can even shut down the hosts for you when you configure two cluster scheduling properties: EnableAutomaticPM and HostsInReserve. HostsInReserve should be at least 1 though.
>> >> Thanks, >> >> Thomas >> > > I think that if in the mean time you don't make any configuration changes > and you don't update anything, there is no reason to have problems. > In case of changes done, it could depend on what they are: are you thinking > about any particular scenario? > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From mail at renout.nl Fri Feb 9 10:03:18 2018 From: mail at renout.nl (Renout Gerrits) Date: Fri, 9 Feb 2018 11:03:18 +0100 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider In-Reply-To: References: Message-ID: Hi Maoz, You should not be using the engine and not the root user for the ssh keys. The actions are delegated to a host and the vdsm user. So you should set-up ssh keys for the vdsm user on one or all of the hosts (remember to select this host as proxy host in the gui). Probably the documentation should be updated to make this more clear. 1. Make the keygen for vdsm user: # sudo -u vdsm ssh-keygen 2.Do the first login to confirm the fingerprints using "yes": # sudo -u vdsm ssh root at xxx.xxx.xxx.xxx 3. Then copy the key to the KVm host running the vm: # sudo -u vdsm ssh-copy-id root at xxx.xxx.xxx.xxx 4. Now verify is vdsm can login without password or not: # sudo -u vdsm ssh root at xxx.xxx.xxx.xxx On Thu, Feb 8, 2018 at 3:12 PM, Petr Kotas wrote: > You can generate one :). There are different guides for different > platforms. > > The link I sent is the good start on where to put the keys and how to set > it up. > > Petr > > On Thu, Feb 8, 2018 at 3:09 PM, maoz zadok wrote: > >> Using the command line on the engine machine (as root) works fine. 
I >> don't use ssh key from the agent GUI but the authentication section (with >> root user and password), >> I think that it's a bug, I manage to migrate with TCP but I just want to >> let you know. >> >> is it possible to use ssh-key from the agent GUI? how can I get the key? >> >> On Thu, Feb 8, 2018 at 2:51 PM, Petr Kotas wrote: >> >>> Hi Maoz, >>> >>> it looks like cannot connect due to wrong setup of ssh keys. Which linux >>> are you using? >>> The guide for setting the ssh connection to libvirt is here: >>> https://wiki.libvirt.org/page/SSHSetup >>> >>> May it helps? >>> >>> Petr >>> >>> On Wed, Feb 7, 2018 at 10:53 PM, maoz zadok wrote: >>> >>>> Hello there, >>>> >>>> I'm following https://www.ovirt.org/develop/ >>>> release-management/features/virt/KvmToOvirt/ guide in order to import >>>> VMS from Libvirt to oVirt using ssh. >>>> URL: "qemu+ssh://host1.example.org/system" >>>> >>>> and get the following error: >>>> Failed to communicate with the external provider, see log for >>>> additional details. 
>>>> >>>> >>>> *oVirt agent log:* >>>> >>>> *- Failed to retrieve VMs information from external server >>>> qemu+ssh://XXX.XXX.XXX.XXX/system* >>>> *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot >>>> recv data: Host key verification failed.: Connection reset by peer* >>>> >>>> >>>> >>>> *remote host sshd DEBUG log:* >>>> *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 port >>>> 48148 on XXX.XXX.XXX.123 port 22* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version 2.0; >>>> client software version OpenSSH_7.4* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat >>>> OpenSSH* compat 0x04000000* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string >>>> SSH-2.0-OpenSSH_7.4* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode >>>> for protocol 2.0* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: >>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: >>>> curve25519-sha256 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: >>>> ecdsa-sha2-nistp256 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 
16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting >>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 blocks >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: Connection closed by XXX.XXX.XXX.147 >>>> port 48148 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup* >>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child 110006* >>>> *Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Set /proc/self/oom_score_adj >>>> to 0* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 >>>> newsock 5 pipe 7 sock 8* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after dupping: >>>> 3, 3* >>>> *Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 port >>>> 48150 on XXX.XXX.XXX.123 port 22* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version 2.0; >>>> client software version OpenSSH_7.4* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat >>>> OpenSSH* compat 0x04000000* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string >>>> SSH-2.0-OpenSSH_7.4* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility mode >>>> for protocol 2.0* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74 >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types: >>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>> *Feb 7 16:38:29 XXX 
sshd[110007]: debug1: SSH2_MSG_KEXINIT sent >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: algorithm: >>>> curve25519-sha256 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: host key algorithm: >>>> ecdsa-sha2-nistp256 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting >>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 blocks >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS >>>> [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: Connection closed by XXX.XXX.XXX.147 >>>> port 48150 [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup* >>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child 110008* >>>> *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Set /proc/self/oom_score_adj >>>> to 0* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 >>>> newsock 5 pipe 7 sock 8* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: inetd sockets after dupping: >>>> 3, 3* >>>> *Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 port >>>> 48152 on XXX.XXX.XXX.123 port 22* 
>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version 2.0; >>>> client software version OpenSSH_7.4* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat >>>> OpenSSH* compat 0x04000000* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Local version string >>>> SSH-2.0-OpenSSH_7.4* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode >>>> for protocol 2.0* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74 >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types: >>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm: >>>> curve25519-sha256 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm: >>>> ecdsa-sha2-nistp256 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting >>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 blocks >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS >>>> [preauth]* 
>>>> *Feb 7 16:38:30 XXX sshd[110009]: Connection closed by XXX.XXX.XXX.147 >>>> port 48152 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup* >>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child 110010* >>>> *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110011.* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Set /proc/self/oom_score_adj >>>> to 0* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 out 5 >>>> newsock 5 pipe 7 sock 8* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after dupping: >>>> 3, 3* >>>> *Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 port >>>> 48154 on XXX.XXX.XXX.123 port 22* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version 2.0; >>>> client software version OpenSSH_7.4* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat >>>> OpenSSH* compat 0x04000000* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string >>>> SSH-2.0-OpenSSH_7.4* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode >>>> for protocol 2.0* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74 >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types: >>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm: >>>> curve25519-sha256 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: >>>> ecdsa-sha2-nistp256 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: >>>> chacha20-poly1305 at openssh.com 
MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: >>>> chacha20-poly1305 at openssh.com MAC: >>>> compression: none [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>>> need=64 dh_need=64 [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting >>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 blocks >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS >>>> [preauth]* >>>> *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by XXX.XXX.XXX.147 >>>> port 48154 [preauth]* >>>> >>>> >>>> Thank you! >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Fri Feb 9 10:06:21 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 11:06:21 +0100 Subject: [ovirt-users] when creating VMs, I don't want hosted_storage to be an option In-Reply-To: References: <08CE269E-5CAC-4DC6-9E6C-07B7C1C95DD0@gmail.com> <67856393-8369-c4d7-2411-e55bf33f70d3@bootc.net> Message-ID: On Thu, Jun 22, 2017 at 11:37 AM, Martin Sivak wrote: > Hi, > > Chris is right. We want to remove the specialty status from that > storage domain. It is one of the highest priority items for hosted > engine right now. > > There is currently no way to hide it I am afraid. 
> Best regards
>
> --
> Martin Sivak
> SLA / oVirt

Hello Martin (and list),
any update on this item to remove specialty of hosted_engine storage?
Any bugzilla RFE or pointer? I think it didn't catch 4.2, correct?

Thanks,
Gianluca

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From akrejcir at redhat.com  Fri Feb 9 10:33:49 2018
From: akrejcir at redhat.com (Andrej Krejcir)
Date: Fri, 9 Feb 2018 11:33:49 +0100
Subject: [ovirt-users] Cannot Remove Disk
In-Reply-To: 
References: 
Message-ID: 

The error can mean that a quota does not exist for the DC, or was saved in an invalid state.

Try these steps in the UI:
- Set the quota mode to Audit on the DC
- Check the DC details page, quota tab, if there is a quota defined
- If not, create one
- If it is, try editing it and save it. The UI will save a valid quota.
- Set the quota mode back to Disabled.

On 9 February 2018 at 00:00, Donny Davis wrote:

> So now when I create a new disk on the same domain with quota disabled, I
> get
>
> - Cannot edit Virtual Disk. Quota is not valid.
>
> This is a new machine, created after the above issue was solved
>
> On Thu, Feb 8, 2018 at 11:56 AM, Donny Davis wrote:
>
>> Disabling the quota for that DC did the trick. The funny part is it was
>> never enabled. I put it in audit mode, tried a delete, got the error... and
>> then disabled it.
>>
>> Worked, I am a happy camper... Thanks guys.
>>
>> On Thu, Feb 8, 2018 at 11:51 AM, Andrej Krejcir wrote:
>>
>>> Or, it should be enough to disable the quota in the data center, then
>>> change it for the disk and reenable it again.
>>>
>>> On 8 February 2018 at 17:42, Andrej Krejcir wrote:
>>>
>>>> Do the operations work in the UI?
>>>> If not, then the DB has to be changed manually: >>>> >>>> $ psql engine >>>> >>>> UPDATE image_storage_domain_map sd_map >>>> SET quota_id = NULL >>>> FROM images >>>> WHERE sd_map.image_id = images.image_guid >>>> AND images.image_group_id = 'ID_OF_THE_DISK'; >>>> >>>> >>>> On 8 February 2018 at 17:06, Donny Davis wrote: >>>> >>>>> Any operation on the disk throws this error, to include changing the >>>>> quota. >>>>> >>>>> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >>>>> wrote: >>>>> >>>>>> The error message means that the data center (storage pool) where the >>>>>> quota is defined is different from the data center where the disk is. >>>>>> >>>>>> It seems like a bug, as it should not be possible to assign a quota >>>>>> to a disk from a different data center. >>>>>> >>>>>> To fix it, try setting the quota of the disk to any quota from the >>>>>> same data center. >>>>>> >>>>>> ?Regards, >>>>>> Andrej? >>>>>> >>>>>> >>>>>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>>>>> >>>>>>> Andrej, this might be related to the recent fixes of yours in that >>>>>>> area. Can you take a look please? >>>>>>> >>>>>>> Best regards >>>>>>> >>>>>>> Martin Sivak >>>>>>> >>>>>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>>>>> wrote: >>>>>>> > Ovirt 4.2 has been humming away quite nicely for me in the last >>>>>>> few months, >>>>>>> > and now I am hitting an issue when try to touch any api call that >>>>>>> has to do >>>>>>> > with a specific disk. This disk resides on a hyperconverged DC, >>>>>>> and none of >>>>>>> > the other disks seem to be affected. Here is the error thrown. 
>>>>>>> > >>>>>>> > 2018-02-08 10:13:20,005-05 ERROR >>>>>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] >>>>>>> (default task-22) >>>>>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during >>>>>>> ValidateFailure.: >>>>>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>>>>> Quota >>>>>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>>>>> > 5a497956-0380-021e-0025-00000000035e >>>>>>> > >>>>>>> > >>>>>>> > >>>>>>> > Any ideas what can be done to fix this? >>>>>>> > >>>>>>> > _______________________________________________ >>>>>>> > Users mailing list >>>>>>> > Users at ovirt.org >>>>>>> > http://lists.ovirt.org/mailman/listinfo/users >>>>>>> > >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msivak at redhat.com Fri Feb 9 10:43:56 2018 From: msivak at redhat.com (Martin Sivak) Date: Fri, 9 Feb 2018 11:43:56 +0100 Subject: [ovirt-users] when creating VMs, I don't want hosted_storage to be an option In-Reply-To: References: <08CE269E-5CAC-4DC6-9E6C-07B7C1C95DD0@gmail.com> <67856393-8369-c4d7-2411-e55bf33f70d3@bootc.net> Message-ID: Hi, we got much closer to officially remove the specialty status of both the domain and the VM in 4.2 with features like Node 0 deployment (default since 4.2.1) and direct libvirtxml support in engine and HE (4.2.2 iirc). 
There are a couple of outstanding issues:

- HE needs to know how to connect all storage domains necessary for HE VM disks (not 100% related, but close)
- (live) storage migration is not supported yet - HE nodes need to learn about the new connection details
- changes to gluster topology are not supported yet - same reason as above
- we have a bug with regards to block devices - will be fixed by https://gerrit.ovirt.org/#/c/87325/
- fencing and SPM role need to be tested a bit more to make sure we have no surprises there
- old deployments might not have some data in the engine DB (https://bugzilla.redhat.com/show_bug.cgi?id=1373930)

We will not be adding any additional limits, as everything seems to work in the usual cases and we are working on removing the remaining restrictions. I am not 100% certain when it will be finished exactly, but you can use it now if you are careful (basically, do not use custom mount options and do not add disks to the HE VM that would come from a different SD!!). We have two tracking bugs for the related work: https://bugzilla.redhat.com/show_bug.cgi?id=1455169 and https://bugzilla.redhat.com/show_bug.cgi?id=1393902 - most of what was needed has been fixed already. Best regards Martin Sivak On Fri, Feb 9, 2018 at 11:06 AM, Gianluca Cecchi wrote: > On Thu, Jun 22, 2017 at 11:37 AM, Martin Sivak wrote: >> >> Hi, >> >> Chris is right. We want to remove the specialty status from that >> storage domain. It is one of the highest priority items for hosted >> engine right now. >> >> There is currently no way to hide it, I am afraid. >> >> Best regards >> >> -- >> Martin Sivak >> SLA / oVirt >> > > Hello Martin (and list), > any update on this item to remove the specialty of hosted_engine storage? > Any bugzilla RFE or pointer? I think it didn't make 4.2, correct?
> > Thanks, > Gianluca From gianluca.cecchi at gmail.com Fri Feb 9 10:46:35 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 11:46:35 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Thu, Feb 8, 2018 at 4:38 PM, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > On Thu, Feb 8, 2018 at 4:34 PM, Gianluca Cecchi > wrote: > > > > Thanks Luca. > > I asked because I see this inside the gui. > > https://drive.google.com/file/d/12vI9RUq9t4J--jlkqSxvG2jqylGfD2sP/view?usp=sharing > > > > Probably you do it via the api and don't need to provide ESXi > credentials? > > Did you also try from the web admin gui leaving the fields related to > > ESXi empty? > > > > If you're not used to the libvirt connection to vmware, all those fields > can look scary. Anyway, you need to fill out all the values, because they are > needed to locate the vm you want to migrate. > So you are confirming that with the "VMware" import option all fields are needed, ESXi hostname and credentials included. In fact if I don't fill them in and click "Load", for the ESXi fields I get the popup notice that "this field can't be empty"... > When using that function you'll point to a specific host connected to > the given vcenter, in the given datacenter, inside a given cluster. > The import function will contact vcenter and ask for all vms in shutdown > status on that host, then allow you to continue with the wizard. > Which "function" are you referring to? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed...
URL: From gianluca.cecchi at gmail.com Fri Feb 9 10:51:41 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 11:51:41 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 11:46 AM, Gianluca Cecchi wrote: > On Thu, Feb 8, 2018 at 4:38 PM, Luca 'remix_tj' Lorenzetto < > lorenzetto.luca at gmail.com> wrote: > >> On Thu, Feb 8, 2018 at 4:34 PM, Gianluca Cecchi >> wrote: >> > >> > Thanks Luca. >> > I asked because I see this inside the gui. >> > https://drive.google.com/file/d/12vI9RUq9t4J--jlkqSxvG2jqylGfD2sP/view?usp=sharing >> > >> > Probably you do it via the api and don't need to provide ESXi >> credentials? >> > Did you also try from the web admin gui leaving the fields related to >> > ESXi empty? >> > >> >> If you're not used to the libvirt connection to vmware, all those fields >> can look scary. Anyway, you need to fill out all the values, because they are >> needed to locate the vm you want to migrate. >> > > So you are confirming that with the "VMware" import option all fields are > needed, ESXi hostname and credentials included. > In fact if I don't fill them in and click "Load", for the ESXi fields I get the > popup notice that "this field can't be empty"... > Oooops, sorry! I misread the window ;-( The "password" field is below the ESXi field, but it is related to the vSphere credentials, not ESXi.... In fact inside the window there is only one "password" field. I just tried and it works, in the sense that with "Load" I see the VMs inside the DC/Cluster. Sorry again for the noise -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lorenzetto.luca at gmail.com Fri Feb 9 11:01:34 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 9 Feb 2018 12:01:34 +0100 Subject: [ovirt-users] Info about exporting from vSphere In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 11:46 AM, Gianluca Cecchi wrote: >> >> When using that function you'll point to a specific host connected to >> the given vcenter, in the given datacenter, inside a given cluster. >> The import function will contact vcenter and ask for all vms in shutdown >> status on that host, then allow you to continue with the wizard. > > > Which "function" are you referring to? The VMware import source. On Fri, Feb 9, 2018 at 11:51 AM, Gianluca Cecchi wrote: [cut] > > Sorry again for the noise > No problem. If you need, I can provide you with some python-sdk code. I've built an entire migration tool that interacts with VMware and oVirt to orchestrate the migration of VMs. Once the migration and some cleanup are completed, we'll release it. At the moment it is just a bunch of home-made libraries connected to each other through a main function that's called through python-rq. Luca -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From spfma.tech at e.mail.fr Fri Feb 9 11:08:08 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Fri, 09 Feb 2018 12:08:08 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. Message-ID: <20180209110809.13B3CE2264@smtp01.mail.de> Hi, I just wanted to increase the number of CPUs for a VM and after validating, I got the following error when trying to start it: VM vm-test is down with error.
Exit message: XML error: Multiple 'scsi' controllers with index '0'. I am sure it is a bug, but for now, what can I do in order to remove or edit the conflicting device definitions? I need to be able to start this machine. 4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer) Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Fri Feb 9 11:21:46 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Fri, 09 Feb 2018 12:21:46 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: References: Message-ID: <20180209112146.2F2D0E12B8@smtp01.mail.de> Hi, Could someone explain to me at least what "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." means? As far as I know 4.2 is the most recent branch available, isn't it? Regards Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com wrote: Not sure I understand from which version you are trying to upgrade and what the exact upgrade flow is. If I got it correctly, it seems that you upgraded the hosts to 4.2, but the engine is still on 4.1? What exactly were the upgrade steps? Please explain the flow: what have you done after upgrading the hosts, and to what version? Cheers, On Wed, Feb 7, 2018 at 3:00 PM, wrote: Hi, Thanks a lot for your answer. I applied some updates at node level, but I forgot to upgrade the engine! When I try to do so I get a strange error: "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading."
Here are the installed packages on my nodes:

python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64
ovirt-imageio-common-1.2.0-1.el7.centos.noarch
ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch
ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch
ovirt-setup-lib-1.1.4-1.el7.centos.noarch
ovirt-release42-4.2.0-1.el7.centos.noarch
ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch
ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64
ovirt-host-4.2.0-1.el7.centos.x86_64
ovirt-host-deploy-1.7.0-1.el7.centos.noarch
ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch
ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch
ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch
ovirt-vmconsole-host-1.0.4-1.el7.noarch
cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch
ovirt-vmconsole-1.0.4-1.el7.noarch

What am I supposed to do? I see no newer packages available. Regards Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com wrote: Hi This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - https://bugzilla.redhat.com/show_bug.cgi?id=1528906 The "no default route" bug was fixed in - https://bugzilla.redhat.com/show_bug.cgi?id=1477589 Thanks, On Wed, Feb 7, 2018 at 1:15 PM, wrote: Hi, I am experiencing a new problem: when I try to modify something in the network setup on the second node (added to the cluster after installing the engine on the other one) using the Engine GUI, I get the following error when validating: must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" Attribut : ipConfiguration.iPv4Addresses[0].gateway Moreover, on the general status of the server, I have a "Host has no default route" alert. The ovirtmgmt network has a defined gateway of course, and the storage network has none because it is not required. Both servers have the same setup, with different addresses of course :-) I have not been able to find anything useful in the logs. Is this a bug or am I doing something wrong?
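For what it's worth, the pattern in that validation error is a standard dotted-quad IPv4 check. A rough, self-contained way to test a gateway string against it (the octet alternatives below are copied from the error message; the separator is assumed to be a literal dot, and the match is anchored strictly here, which the engine's own check may not do):

```python
import re

# Octet alternatives copied from the validation error message; the
# separator is assumed to be a literal dot. fullmatch() anchors the
# check at both ends so trailing junk is rejected.
OCTET = r"(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
IPV4 = re.compile(r"({o}\.){{3}}{o}".format(o=OCTET))

def valid_gateway(addr: str) -> bool:
    """True if addr is a dotted-quad IPv4 address with octets 0-255."""
    return IPV4.fullmatch(addr) is not None

print(valid_gateway("192.168.1.254"))  # True
print(valid_gateway("192.168.1.256"))  # False: 256 is not a valid octet
```

If a syntactically valid gateway still trips the validator, the problem is more likely the engine-side bug referenced in the replies than the address itself.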
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From fromani at redhat.com Fri Feb 9 11:43:43 2018 From: fromani at redhat.com (Francesco Romani) Date: Fri, 9 Feb 2018 12:43:43 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. In-Reply-To: <20180209110809.13B3CE2264@smtp01.mail.de> References: <20180209110809.13B3CE2264@smtp01.mail.de> Message-ID: Hi, could you please file a bug? Please attach the failing XML, you should find it pretty easily in the Vdsm logs. Thanks, On 02/09/2018 12:08 PM, spfma.tech at e.mail.fr wrote: > ? > Hi, > ? > I just wanted to increase the number of CPUs for a VM and after > validating, I got the following error when I try to start it: > ? > VM vm-test is down with error. Exit message: XML error: Multiple > 'scsi' controllers with index '0'. > ? > I am sure it is a bug, but for now, what can I do in order to remove > or edit conflicting devices definitions ? I need to be able to start > this machine. > ? 
> 4.2.0.2-1.el7.centos (as I still don't manage to update the hosted > engine to something newer) > > Regards > > ------------------------------------------------------------------------ > FreeMail powered by mail.fr > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Francesco Romani Senior SW Eng., Virtualization R&D Red Hat IRC: fromani github: @fromanirh -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Fri Feb 9 12:02:34 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 9 Feb 2018 13:02:34 +0100 Subject: [ovirt-users] Use the "hosted_engine" data domain as data domain for others VM In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 10:20 AM, yayo (j) wrote: > Hi, > > Is there any problem to use the "hosted_engine" data domain to put disk of > others VM? > Hi, it will be the default in 4.3. It's tracked here: https://bugzilla.redhat.com/show_bug.cgi?id=1451653 I'm not aware of any specific blocker on 4.2, so it should work on the technical side, although it's not the recommended architecture on production systems. You can actually face some performance degradation on block device storage domains (iSCSI and FC) if you have a lot of inactive VMs on the hosted-engine storage domain, due to https://bugzilla.redhat.com/show_bug.cgi?id=1443819 > I have created a "too big" "hosted_engine" data domain so I want to use > that space...
URL: From msivak at redhat.com Fri Feb 9 12:13:28 2018 From: msivak at redhat.com (Martin Sivak) Date: Fri, 9 Feb 2018 13:13:28 +0100 Subject: [ovirt-users] Use the "hosted_engine" data domain as data domain for others VM In-Reply-To: References: Message-ID: Hi, it should work in general, but there are couple of corner cases to be aware of. Hosted engine VM should have its disks only on the HE storage domain. The HE should be installed using the new Node 0 approach (default in 4.2.1+) or it must not use any custom mount options (https://bugzilla.redhat.com/show_bug.cgi?id=1373930) For all those reasons we do not recommend using it in production, but we are not aware about anything that would really block you from doing it. It just hasn't been tested and polished enough yet. Best regards Martin Sivak On Fri, Feb 9, 2018 at 1:02 PM, Simone Tiraboschi wrote: > > > On Fri, Feb 9, 2018 at 10:20 AM, yayo (j) wrote: >> >> Hi, >> >> Is there any problem to use the "hosted_engine" data domain to put disk of >> others VM? > > > Hi, > it will be the default in 4.3. > > It's tracked here: > https://bugzilla.redhat.com/show_bug.cgi?id=1451653 > > I'm not aware of any specific block on 4.2 so it should work on technical > side although it's not the recommended architecture on production systems. > For sure you can actually face some perforce degradation of block device > storage domains (iSCSI and FC) if you have a lot of inactive VMs on the > hosted-engine storage domain due to > https://bugzilla.redhat.com/show_bug.cgi?id=1443819 > > >> >> I have created a "too big" "hosted_engine" data domain so I want to use >> that space... 
>> >> Thank you >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From stirabos at redhat.com Fri Feb 9 12:17:19 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Fri, 9 Feb 2018 13:17:19 +0100 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> Message-ID: On Fri, Feb 9, 2018 at 9:06 AM, Sandro Bonazzola wrote: > > > 2018-02-09 8:53 GMT+01:00 Reznikov Alexei : > >> Hi all! >> >> After upgrade from ovirt 4.0 to 4.1, a have trouble add to next >> HostedEngine host to my cluster via webui... host add succesfully and >> become up, but HE not active in this host. >> >> log's from trouble host >> # cat agent.log >> > KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, >> key=gateway' >> > > Adding Simone > It shouldn't happen. I suspect that something went wrong creating the configuration volume on the shared storage at the end of the deployment. Alexei, can both of you attach you hosted-engine-setup logs? Can you please check what happens on hosted-engine --get-shared-config gateway Thanks > > > >> >> # cat /etc/ovirt-hosted-engine/hosted-engine.conf >> ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem >> host_id=2 >> >> log deploy from engine in attach. >> >> trouble host: >> ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch >> ovirt-host-deploy-1.6.7-1.el7.centos.noarch >> vdsm-4.19.45-1.el7.centos.x86_64 >> CentOS Linux release 7.4.1708 (Core) >> >> engine host: >> ovirt-release41-4.1.9-1.el7.centos.noarch >> ovirt-engine-4.1.9.1-1.el7.centos.noarch >> CentOS Linux release 7.4.1708 (Core) > > >> Please help me fix it. 
>> >> Thanx, Alex. >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA > > TRIED. TESTED. TRUSTED. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Fri Feb 9 12:28:52 2018 From: donny at fortnebula.com (Donny Davis) Date: Fri, 9 Feb 2018 07:28:52 -0500 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: Error while executing action: Cannot edit Quota. Quota is not valid. On Fri, Feb 9, 2018 at 5:33 AM, Andrej Krejcir wrote: > The error can mean that a quota does not exist for the DC, or was saved in > an invalid state. > > Try these steps in the UI: > - Set the quota mode to Audit on the DC > - Check the DC details page, quota tab, if there is a quota defined > - If not, create one > - If it is, try editing it and save it. The UI will save a valid quota. > > - Set the quota mode back to Disabled. > > On 9 February 2018 at 00:00, Donny Davis wrote: > >> So now when I create a new disk on the same domain with quota disabled, I >> get >> >> - Cannot edit Virtual Disk. Quota is not valid. >> >> >> This is a new machine, created after the above issue was solved >> >> On Thu, Feb 8, 2018 at 11:56 AM, Donny Davis >> wrote: >> >>> Disabling the quota for that DC did the trick. The funny part is it was >>> never enabled. I put it in audit mode, tried a delete, got the error... and >>> then disabled it. >>> >>> Worked, I am a happy camper... Thanks guys. >>> >>> On Thu, Feb 8, 2018 at 11:51 AM, Andrej Krejcir >>> wrote: >>> >>>> Or, it should be enough to disable the quota in the data center, then >>>> change it for the disk and reenable it again. >>>> >>>> On 8 February 2018 at 17:42, Andrej Krejcir >>>> wrote: >>>> >>>>> Do the operations work in the UI? 
>>>>> If not, then the DB has to be changed manually: >>>>> >>>>> $ psql engine >>>>> >>>>> UPDATE image_storage_domain_map sd_map >>>>> SET quota_id = NULL >>>>> FROM images >>>>> WHERE sd_map.image_id = images.image_guid >>>>> AND images.image_group_id = 'ID_OF_THE_DISK'; >>>>> >>>>> >>>>> On 8 February 2018 at 17:06, Donny Davis wrote: >>>>> >>>>>> Any operation on the disk throws this error, to include changing the >>>>>> quota. >>>>>> >>>>>> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >>>>>> wrote: >>>>>> >>>>>>> The error message means that the data center (storage pool) where >>>>>>> the quota is defined is different from the data center where the disk is. >>>>>>> >>>>>>> It seems like a bug, as it should not be possible to assign a quota >>>>>>> to a disk from a different data center. >>>>>>> >>>>>>> To fix it, try setting the quota of the disk to any quota from the >>>>>>> same data center. >>>>>>> >>>>>>> ?Regards, >>>>>>> Andrej? >>>>>>> >>>>>>> >>>>>>> On 8 February 2018 at 16:37, Martin Sivak wrote: >>>>>>> >>>>>>>> Andrej, this might be related to the recent fixes of yours in that >>>>>>>> area. Can you take a look please? >>>>>>>> >>>>>>>> Best regards >>>>>>>> >>>>>>>> Martin Sivak >>>>>>>> >>>>>>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>>>>>> wrote: >>>>>>>> > Ovirt 4.2 has been humming away quite nicely for me in the last >>>>>>>> few months, >>>>>>>> > and now I am hitting an issue when try to touch any api call that >>>>>>>> has to do >>>>>>>> > with a specific disk. This disk resides on a hyperconverged DC, >>>>>>>> and none of >>>>>>>> > the other disks seem to be affected. Here is the error thrown. 
>>>>>>>> > >>>>>>>> > 2018-02-08 10:13:20,005-05 ERROR >>>>>>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] >>>>>>>> (default task-22) >>>>>>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during >>>>>>>> ValidateFailure.: >>>>>>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>>>>>> Quota >>>>>>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>>>>>> > 5a497956-0380-021e-0025-00000000035e >>>>>>>> > >>>>>>>> > >>>>>>>> > >>>>>>>> > Any ideas what can be done to fix this? >>>>>>>> > >>>>>>>> > _______________________________________________ >>>>>>>> > Users mailing list >>>>>>>> > Users at ovirt.org >>>>>>>> > http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> > >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Fri Feb 9 12:49:43 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Fri, 09 Feb 2018 13:49:43 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. In-Reply-To: References: Message-ID: <20180209124943.B145BE2262@smtp01.mail.de> I have just done it. Is it possible to tweak this XML file (where ?) in order to get a working VM ? Regards Le 09-Feb-2018 12:44:08 +0100, fromani at redhat.com a crit: Hi, could you please file a bug? Please attach the failing XML, you should find it pretty easily in the Vdsm logs. Thanks, On 02/09/2018 12:08 PM, spfma.tech at e.mail.fr wrote: Hi, I just wanted to increase the number of CPUs for a VM and after validating, I got the following error when I try to start it: VM vm-test is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'. I am sure it is a bug, but for now, what can I do in order to remove or edit conflicting devices definitions ? I need to be able to start this machine. 
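Since the failing domain XML ends up in the vdsm log (as suggested above), the conflicting definitions can be located mechanically before editing anything. A rough sketch, run against a made-up `<devices>` fragment rather than the real VM's XML:

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Hypothetical fragment shaped like the <devices> section of a libvirt
# domain XML; it is not taken from the actual failing VM.
DOMXML = """
<domain>
  <devices>
    <controller type='scsi' index='0' model='virtio-scsi'/>
    <controller type='scsi' index='0' model='virtio-scsi'/>
    <controller type='usb' index='0'/>
  </devices>
</domain>
"""

def duplicate_controllers(xml_text):
    """Return (type, index) pairs that are defined more than once."""
    counts = Counter((c.get("type"), c.get("index"))
                     for c in ET.fromstring(xml_text).iter("controller"))
    return [key for key, n in counts.items() if n > 1]

print(duplicate_controllers(DOMXML))  # [('scsi', '0')]
```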
4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer) Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From akrejcir at redhat.com Fri Feb 9 13:02:13 2018 From: akrejcir at redhat.com (Andrej Krejcir) Date: Fri, 9 Feb 2018 14:02:13 +0100 Subject: [ovirt-users] Cannot Remove Disk In-Reply-To: References: Message-ID: The last workaround I can think of is to set the quota mode to Audit, create a new quota, and use this new quota for the new disk. Please, can you open a bug and include the steps for how to get to this state? Thanks On 9 February 2018 at 13:28, Donny Davis wrote: > Error while executing action: Cannot edit Quota. Quota is not valid. > > On Fri, Feb 9, 2018 at 5:33 AM, Andrej Krejcir > wrote: > >> The error can mean that a quota does not exist for the DC, or was saved >> in an invalid state. >> >> Try these steps in the UI: >> - Set the quota mode to Audit on the DC >> - Check the DC details page, quota tab, if there is a quota defined >> - If not, create one >> - If it is, try editing it and save it. The UI will save a valid quota. >> >> - Set the quota mode back to Disabled. >> >> On 9 February 2018 at 00:00, Donny Davis wrote: >> >>> So now when I create a new disk on the same domain with quota disabled, >>> I get >>> >>> - Cannot edit Virtual Disk. Quota is not valid.
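If the UI workarounds fail, the manual `UPDATE image_storage_domain_map ... FROM images` statement quoted elsewhere in this thread can be rehearsed on a scratch database first. A minimal sketch with hypothetical table contents, using sqlite3 to stand in for the engine's PostgreSQL, with the join rewritten as a portable subquery:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE images (image_guid TEXT, image_group_id TEXT);
CREATE TABLE image_storage_domain_map (image_id TEXT, quota_id TEXT);
INSERT INTO images VALUES ('img-1', 'disk-A'), ('img-2', 'disk-B');
INSERT INTO image_storage_domain_map
    VALUES ('img-1', 'quota-X'), ('img-2', 'quota-Y');
""")

# Clear quota_id for every image belonging to one disk; 'disk-A' stands
# in for the ID_OF_THE_DISK placeholder in the quoted statement.
con.execute("""
UPDATE image_storage_domain_map
SET quota_id = NULL
WHERE image_id IN (SELECT image_guid FROM images
                   WHERE image_group_id = 'disk-A')
""")

rows = con.execute("SELECT image_id, quota_id FROM image_storage_domain_map"
                   " ORDER BY image_id").fetchall()
print(rows)  # [('img-1', None), ('img-2', 'quota-Y')]
```

Only the row whose image belongs to the targeted disk loses its quota; everything else is untouched, which is what you want before running the real statement against the engine DB.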
>>> >>> >>> This is a new machine, created after the above issue was solved >>> >>> On Thu, Feb 8, 2018 at 11:56 AM, Donny Davis >>> wrote: >>> >>>> Disabling the quota for that DC did the trick. The funny part is it was >>>> never enabled. I put it in audit mode, tried a delete, got the error... and >>>> then disabled it. >>>> >>>> Worked, I am a happy camper... Thanks guys. >>>> >>>> On Thu, Feb 8, 2018 at 11:51 AM, Andrej Krejcir >>>> wrote: >>>> >>>>> Or, it should be enough to disable the quota in the data center, then >>>>> change it for the disk and reenable it again. >>>>> >>>>> On 8 February 2018 at 17:42, Andrej Krejcir >>>>> wrote: >>>>> >>>>>> Do the operations work in the UI? >>>>>> If not, then the DB has to be changed manually: >>>>>> >>>>>> $ psql engine >>>>>> >>>>>> UPDATE image_storage_domain_map sd_map >>>>>> SET quota_id = NULL >>>>>> FROM images >>>>>> WHERE sd_map.image_id = images.image_guid >>>>>> AND images.image_group_id = 'ID_OF_THE_DISK'; >>>>>> >>>>>> >>>>>> On 8 February 2018 at 17:06, Donny Davis >>>>>> wrote: >>>>>> >>>>>>> Any operation on the disk throws this error, to include changing the >>>>>>> quota. >>>>>>> >>>>>>> On Thu, Feb 8, 2018 at 11:03 AM, Andrej Krejcir >>>>>> > wrote: >>>>>>> >>>>>>>> The error message means that the data center (storage pool) where >>>>>>>> the quota is defined is different from the data center where the disk is. >>>>>>>> >>>>>>>> It seems like a bug, as it should not be possible to assign a quota >>>>>>>> to a disk from a different data center. >>>>>>>> >>>>>>>> To fix it, try setting the quota of the disk to any quota from the >>>>>>>> same data center. >>>>>>>> >>>>>>>> ?Regards, >>>>>>>> Andrej? >>>>>>>> >>>>>>>> >>>>>>>> On 8 February 2018 at 16:37, Martin Sivak >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Andrej, this might be related to the recent fixes of yours in that >>>>>>>>> area. Can you take a look please? 
>>>>>>>>> >>>>>>>>> Best regards >>>>>>>>> >>>>>>>>> Martin Sivak >>>>>>>>> >>>>>>>>> On Thu, Feb 8, 2018 at 4:18 PM, Donny Davis >>>>>>>>> wrote: >>>>>>>>> > Ovirt 4.2 has been humming away quite nicely for me in the last >>>>>>>>> few months, >>>>>>>>> > and now I am hitting an issue when try to touch any api call >>>>>>>>> that has to do >>>>>>>>> > with a specific disk. This disk resides on a hyperconverged DC, >>>>>>>>> and none of >>>>>>>>> > the other disks seem to be affected. Here is the error thrown. >>>>>>>>> > >>>>>>>>> > 2018-02-08 10:13:20,005-05 ERROR >>>>>>>>> > [org.ovirt.engine.core.bll.storage.disk.RemoveDiskCommand] >>>>>>>>> (default task-22) >>>>>>>>> > [7b48d1ec-53a7-497a-af8e-938f30a321cf] Error during >>>>>>>>> ValidateFailure.: >>>>>>>>> > org.ovirt.engine.core.bll.quota.InvalidQuotaParametersException: >>>>>>>>> Quota >>>>>>>>> > 6156b8dd-50c9-4e8f-b1f3-4a6449b02c7b does not match storage pool >>>>>>>>> > 5a497956-0380-021e-0025-00000000035e >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > >>>>>>>>> > Any ideas what can be done to fix this? >>>>>>>>> > >>>>>>>>> > _______________________________________________ >>>>>>>>> > Users mailing list >>>>>>>>> > Users at ovirt.org >>>>>>>>> > http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> > >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Fri Feb 9 13:04:18 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 14:04:18 +0100 Subject: [ovirt-users] Info about windows guest performance Message-ID: Hello, while in my activities to accomplish migration of a Windows 2008 R2 VM (with an Oracle RDBMS inside) from vSphere to oVirt, I'm going to check performance related things. Up to now I only ran Windows guests inside my laptops and not inside an oVirt infrastructure. Now I successfully migrated this kind of VM to oVirt 4.1.9. 
The guest had an LSI Logic SAS controller. Inside the oVirt host that I used as proxy (for VMware virt-v2v) I initially didn't have the virtio-win rpm. I presume that is why the oVirt guest was configured with IDE disks... Can you confirm? For this test I started with IDE, then added a virtio-scsi disk, and then also changed the boot disk to virtio-scsi, and all now goes well, also using the ISO provided by ovirt-guest-tools-iso-4.1-3 to install qxl and so on... So far so good. I found this bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=1277353 where it seems that "For optimum I/O performance it's critical to make sure that Windows guests use the Hyper-V reference counter feature. QEMU command line should include -cpu ...,hv_time and -no-hpet" Analyzing my command line I see "-no-hpet" but I don't see "hv_time". See the full command below. Any hints? Thanks, Gianluca /usr/libexec/qemu-kvm -name guest=testmig,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-testmig/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Westmere,vmx=on -m size=4194304k,slots=16,maxmem=16777216k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid x-y-z-x-y -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=xx,uuid=yy -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-12-testmig/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-02-09T12:41:41,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on -device
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 -netdev tap,fd=30,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -msg timestamp=on -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Fri Feb 9 13:29:38 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 14:29:38 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. In-Reply-To: <20180209124943.B145BE2262@smtp01.mail.de> References: <20180209124943.B145BE2262@smtp01.mail.de> Message-ID: On 09 Feb 2018 13:50, wrote: I have just done it. Is it possible to tweak this XML file (where?) in order to get a working VM? Regards On 09-Feb-2018 12:44:08 +0100, fromani at redhat.com wrote: Hi, could you please file a bug? Please attach the failing XML; you should find it pretty easily in the Vdsm logs. Thanks, On 02/09/2018 12:08 PM, spfma.tech at e.mail.fr wrote: Hi, I just wanted to increase the number of CPUs for a VM, and after validating, I got the following error when I tried to start it: VM vm-test is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'. I am sure it is a bug, but for now, what can I do in order to remove or edit the conflicting device definitions? I need to be able to start this machine.
4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer) Regards ------------------------------ FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Francesco Romani Senior SW Eng., Virtualization R&D Red Hat IRC: fromani github: @fromanirh ------------------------------ FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users I seem to remember a similar problem, and that deactivating the disks of the VM and then activating them again corrected it. Or, in case that doesn't work, try to remove the disks and then re-add them from the floating disk pane... HTH, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.lloyd at keele.ac.uk Fri Feb 9 14:06:35 2018 From: g.lloyd at keele.ac.uk (Gary Lloyd) Date: Fri, 9 Feb 2018 14:06:35 +0000 Subject: [ovirt-users] Ovirt 3.6 to 4.2 upgrade Message-ID: Hi Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ? Does live migration still function between the older vdsm nodes and vdsm nodes with software built against Ovirt 4.2 ? We changed a couple of the vdsm python files to enable iscsi multipath on direct luns. (It's a fairly simple change to a couple of the python files). We've been running it this way since 2012 (Ovirt 3.2). Many Thanks *Gary Lloyd* ________________________________________________ I.T. Systems:Keele University Finance & IT Directorate Keele:Staffs:IC1 Building:ST5 5NB:UK +44 1782 733063 <%2B44%201782%20733073> ________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lorenzetto.luca at gmail.com Fri Feb 9 14:10:06 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Fri, 9 Feb 2018 15:10:06 +0100 Subject: [ovirt-users] Ovirt 3.6 to 4.2 upgrade In-Reply-To: References: Message-ID: Hello Gary, as far as I know the upgrade path isn't direct and you have to migrate first to 4.0, then 4.1 and finally 4.2. You can migrate VMs from 3.6 to 4.0 IIRC, but I don't know if mixing 4.2 and 3.6 is possible. Luca On Fri, Feb 9, 2018 at 3:06 PM, Gary Lloyd wrote: > Hi > > Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ? > Does live migration still function between the older vdsm nodes and vdsm > nodes with software built against Ovirt 4.2 ? > > We changed a couple of the vdsm python files to enable iscsi multipath on > direct luns. > (It's a fairly simple change to a couple of the python files). > > We've been running it this way since 2012 (Ovirt 3.2). > > Many Thanks > > Gary Lloyd > ________________________________________________ > I.T. Systems:Keele University > Finance & IT Directorate > Keele:Staffs:IC1 Building:ST5 5NB:UK > +44 1782 733063 > ________________________________________________ > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the biggest library in the world. But the problem is
that the books are all scattered on the floor" John Allen Paulos, Mathematician (born 1945) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From michal.skrivanek at redhat.com Fri Feb 9 15:25:34 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Fri, 9 Feb 2018 16:25:34 +0100 Subject: [ovirt-users] Info about windows guest performance In-Reply-To: References: Message-ID: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> > On 9 Feb 2018, at 14:04, Gianluca Cecchi wrote: > > Hello, > while in my activities to accomplish migration of a Windows 2008 R2 VM (with an Oracle RDBMS inside) from vSphere to oVirt, I'm going to check performance related things. > > Up to now I only ran Windows guests inside my laptops and not inside an oVirt infrastructure. > > Now I successfully migrated this kind of VM to oVirt 4.1.9. > The guest had an LSI logic sas controller. Inside the oVirt host that I used as proxy (for VMware virt-v2v) I initially didn't have the virtio-win rpm. > I presume that has been for this reason that the oVirt guest has been configured with IDE disks... Yes, you won't get any decent performance unless you use virtio drivers. Either virtio-block or virtio-scsi. > Can you confirm? > > For this test I started with ide, then added a virtio-scsi disk and then changed also the boot disk to virtio-scsi and all now goes well, with also ovirt-guest-tools-iso-4.1-3 provided iso used to install qxl and so on... > > So far so good. > I found this bugzilla: > https://bugzilla.redhat.com/show_bug.cgi?id=1277353 > > where it seems that > > " > For optimum I/O performance it's critical to make sure that Windows guests use the Hyper-V reference counter feature. QEMU command line should include > > -cpu ...,hv_time > > and > > -no-hpet > " > Analyzing my command line I see the "-no-hpet" but I don't see the "hv_time" > See below full command. > Any hints? What OS type do you have set for that VM? Make sure it matches the Windows version.
That enables the hyperv enlightenments settings Thanks, michal > Thanks, > Gianluca > > /usr/libexec/qemu-kvm > -name guest=testmig,debug-threads=on > -S > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-12-testmig/master-key.aes > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off > -cpu Westmere,vmx=on > -m size=4194304k,slots=16,maxmem=16777216k > -realtime mlock=off > -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 > -numa node,nodeid=0,cpus=0-1,mem=4096 > -uuid x-y-z-x-y > -smbios type=1,manufacturer=oVirt,product=oVirt > Node,version=7-4.1708.el7.centos,serial=xx,uuid=yy > -no-user-config > -nodefaults > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-12-testmig/monitor.sock,server,nowait > -mon chardev=charmonitor,id=monitor,mode=control > -rtc base=2018-02-09T12:41:41,driftfix=slew > -global kvm-pit.lost_tick_policy=delay > -no-hpet > -no-shutdown > -boot strict=on > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 > -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 > -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 > -drive if=none,id=drive-ide0-1-0,readonly=on > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 > -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native > -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 > -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native > -device 
scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 > -netdev tap,fd=30,id=hostnet0 > -device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3 > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 > -chardev spicevmc,id=charchannel2,name=vdagent > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 > -spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on > -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 > -msg timestamp=on > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
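[Editor's note] Michal's suggestion above (set the OS type to the matching Windows version so the engine adds the Hyper-V enlightenments) can be checked directly against a captured qemu-kvm command line like the one Gianluca posted: the enlightenments appear as hv_* flags inside the -cpu argument. A small self-contained sketch of that check; the command strings below are illustrative, the second one showing what an enabled guest might look like:

```python
import shlex

def hyperv_flags(qemu_cmdline):
    """Return the Hyper-V enlightenment flags (hv_*) present in the -cpu
    argument of a captured qemu-kvm command line."""
    args = shlex.split(qemu_cmdline)
    for flag, value in zip(args, args[1:]):
        if flag == "-cpu":
            return [f for f in value.split(",") if f.startswith("hv_")]
    return []

# The command line posted in this thread uses -cpu Westmere,vmx=on, so no hv_time:
print(hyperv_flags("/usr/libexec/qemu-kvm -cpu Westmere,vmx=on -no-hpet"))  # []
# A guest with enlightenments enabled would show something like:
print(hyperv_flags("/usr/libexec/qemu-kvm -cpu Westmere,hv_time,hv_relaxed -no-hpet"))  # ['hv_time', 'hv_relaxed']
```

In practice the input would come from the running process, e.g. the output of ps -ef on the host.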
URL: From gianluca.cecchi at gmail.com Fri Feb 9 15:32:07 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Fri, 9 Feb 2018 16:32:07 +0100 Subject: [ovirt-users] Info about windows guest performance In-Reply-To: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> References: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> Message-ID: On Fri, Feb 9, 2018 at 4:25 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > Analyzing my command line I see the "-no-hpet" but I dont see the "hv_time" > > See below full comand. > > Any hints? > > > What OS type do you have set for that VM? Make sure it matches the Windows > version. That enables the hyperv enlightenments settings > > Thanks, > michal > If I edit the VM, in general settings I see "Other OS" as operating system. In General subtab after selecting the VM in "Virtual Machines" tab I again see "Other OS" in "Operating System" and the field "Origin" filled with the value "VMware" During virt-v2v it seems it was recognized as Windows 2008 though... libguestfs: trace: v2v: hivex_value_utf8 = "Windows Server 2008 R2 Enterprise" libguestfs: trace: v2v: hivex_value_key 11809408 I can send all the log if it can help. Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Fri Feb 9 17:36:47 2018 From: rightkicktech at gmail.com (Alex K) Date: Fri, 9 Feb 2018 19:36:47 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Hi all, In case you need any further logs let me know. Thanx for the time. Alex On Thu, Feb 8, 2018 at 9:41 AM, Alex K wrote: > Hi Shani, > > Didn't notice that. > I am attaching later vdsm logs. > > Thanx, > Alex > > On Wed, Feb 7, 2018 at 5:31 PM, Shani Leviim wrote: > >> Hi Alex, >> Sorry for the mail's delay. 
>> >> From a brief look at your logs, I've noticed that the error you've got in >> the engine's log was logged at 2018-02-03 00:22:56, >> while your vdsm log ends at 2018-02-03 00:01:01. >> Is there a way you can reproduce a fuller vdsm log? >> >> >> *Regards,* >> >> *Shani Leviim* >> >> On Sat, Feb 3, 2018 at 5:41 PM, Alex K wrote: >> >>> Attaching the vdsm log from the host that triggered the error, where the VM that >>> was being cloned was running at that time. >>> >>> thanx, >>> Alex >>> >>> On Sat, Feb 3, 2018 at 5:20 PM, Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Feb 3, 2018 3:24 PM, "Alex K" wrote: >>>> >>>> Hi All, >>>> >>>> I have reproduced the backups failure. The VM that failed is named >>>> Win-FileServer and is a Windows 2016 server 64bit with 300GB of disk. >>>> During the cloning step the VM went unresponsive and I had to >>>> stop/start it. >>>> I am attaching the logs. I have another VM with the same OS (named DC-Server >>>> within the logs) but with a smaller disk (60GB) which does not give any error >>>> when it is cloned. >>>> I see a line: >>>> >>>> EVENT_ID: VDS_BROKER_COMMAND_FAILURE(10,802), Correlation ID: null, >>>> Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: VDSM >>>> v2.sitedomain command SnapshotVDS failed: Message timeout which can be >>>> caused by communication issues >>>> >>>> >>>> I suggest adding the relevant vdsm.log as well. >>>> Y. >>>> >>>> >>>> I appreciate any advice on why I am facing such an issue with the backups. >>>> >>>> thanx, >>>> Alex >>>> >>>> On Tue, Jan 30, 2018 at 12:49 AM, Alex K >>>> wrote: >>>> >>>>> Ok. I will reproduce and collect logs. >>>>> >>>>> Thanx, >>>>> Alex >>>>> >>>>> On Jan 29, 2018 20:21, "Mahdi Adnan" wrote: >>>>> >>>>> I have Windows VMs, both client and server. >>>>> If you provide the engine.log file we might have a look at it. >>>>> >>>>> >>>>> -- >>>>> >>>>> Respectfully >>>>> *Mahdi A.
Mahdi* >>>>> >>>>> ------------------------------ >>>>> *From:* Alex K >>>>> *Sent:* Monday, January 29, 2018 5:40 PM >>>>> *To:* Mahdi Adnan >>>>> *Cc:* users >>>>> *Subject:* Re: [ovirt-users] Ovirt backups lead to unresponsive VM >>>>> >>>>> Hi, >>>>> >>>>> I have observed this logged at host when the issue occurs: >>>>> >>>>> VDSM command GetStoragePoolInfoVDS failed: Connection reset by peer >>>>> >>>>> or >>>>> >>>>> VDSM host.domain command GetStatsVDS failed: Connection reset by peer >>>>> >>>>> At engine logs have not been able to correlate. >>>>> >>>>> Are you hosting Windows 2016 server and Windows 10 VMs? >>>>> The weird is that I have same setup on other clusters with no issues. >>>>> >>>>> Thanx, >>>>> Alex >>>>> >>>>> On Sun, Jan 28, 2018 at 9:21 PM, Mahdi Adnan >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> We have a cluster of 17 nodes, backed by GlusterFS storage, and using >>>>> this same script for backup. >>>>> we have no issues with it so far. >>>>> have you checked engine log file ? >>>>> >>>>> >>>>> -- >>>>> >>>>> Respectfully >>>>> *Mahdi A. Mahdi* >>>>> >>>>> ------------------------------ >>>>> *From:* users-bounces at ovirt.org on behalf >>>>> of Alex K >>>>> *Sent:* Wednesday, January 24, 2018 4:18 PM >>>>> *To:* users >>>>> *Subject:* [ovirt-users] Ovirt backups lead to unresponsive VM >>>>> >>>>> Hi all, >>>>> >>>>> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup >>>>> on top glusterfs. >>>>> On some VMs (especially one Windows server 2016 64bit with 500 GB of >>>>> disk). Guest agents are installed at VMs. i almost always observe that >>>>> during the backup of the VM the VM is rendered unresponsive (dashboard >>>>> shows a question mark at the VM status and VM does not respond to ping or >>>>> to anything). >>>>> >>>>> For scheduled backups I use: >>>>> >>>>> https://github.com/wefixit-AT/oVirtBackup >>>>> >>>>> The script does the following: >>>>> >>>>> 1. 
snapshot VM (this is done ok without any failure) >>>>> >>>>> 2. Clone snapshot (this step renders the VM unresponsive) >>>>> >>>>> 3. Export clone >>>>> >>>>> 4. Delete clone >>>>> >>>>> 5. Delete snapshot >>>>> >>>>> >>>>> Do you have any similar experience? Any suggestions to address this? >>>>> >>>>> I have never seen such an issue with hosted Linux VMs. >>>>> >>>>> The cluster has enough storage to accommodate the clone. >>>>> >>>>> >>>>> Thanx, >>>>> >>>>> Alex >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Fri Feb 9 18:13:51 2018 From: rightkicktech at gmail.com (Alex K) Date: Fri, 9 Feb 2018 20:13:51 +0200 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> Message-ID: Hi, did you select "Deploy" when adding the new host? See attached. [image: Inline image 2] Thanx, Alex On Fri, Feb 9, 2018 at 9:53 AM, Reznikov Alexei wrote: > Hi all! > > After upgrading from ovirt 4.0 to 4.1, I have trouble adding the next > HostedEngine host to my cluster via the web UI... the host is added successfully and > comes up, but HE is not active on this host. > > logs from the troubled host > # cat agent.log > > KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, > key=gateway' > > # cat /etc/ovirt-hosted-engine/hosted-engine.conf > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > host_id=2 > > log of the deploy from the engine is attached.
> > trouble host: > ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch > ovirt-host-deploy-1.6.7-1.el7.centos.noarch > vdsm-4.19.45-1.el7.centos.x86_64 > CentOS Linux release 7.4.1708 (Core) > > engine host: > ovirt-release41-4.1.9-1.el7.centos.noarch > ovirt-engine-4.1.9.1-1.el7.centos.noarch > CentOS Linux release 7.4.1708 (Core) > > Please help me fix it. > > Thanx, Alex. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Deploy.png Type: image/png Size: 16466 bytes Desc: not available URL: From vincent at epicenergy.ca Fri Feb 9 18:55:13 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Fri, 9 Feb 2018 10:55:13 -0800 Subject: [ovirt-users] Virt-viewer not working over VPN In-Reply-To: References: Message-ID: Hi, I asked this on the virt-viewer list, but it appears to be dead, so my apologies if this isn't the right place for this question. When I access my vm's locally using virt-viewer on windows clients, everything works fine, spice or vnc. When I access the same vm's remotely over a site-to-site VPN (setup between the two firewalls), it fails with an error: unable to connect to libvirt with uri: [none]. Similarly I cannot connect in a browser-based vnc session (cannot connect to host). I can resolve the DNS of the server from my remote client (domain override in the firewall pointing to the DNS server locally) and everything else I do seems completely unaware of the vpn link (SSH, RDP, etc). For example connecting to https://ovirt-enginr.mydomain.com works as expected. The only function not working remotely is virt-viewer. Any clues would be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... 
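[Editor's note] A first step for narrowing down a "works locally, fails over the VPN" console problem like Vincent's is to test whether the console ports are reachable through the tunnel at all. A minimal self-contained sketch; "node1.mydomain.com" is a placeholder for one of the hypervisors, and the ports are only examples (the qemu command earlier in this digest serves SPICE on tls-port=5900, with further consoles allocated upward from there):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a few console ports on a hypervisor reached over the VPN.
for port in (5900, 5901, 5902):
    state = "open" if port_open("node1.mydomain.com", port) else "closed/filtered"
    print(port, state)
```

If the ports show as closed/filtered over the VPN but open from the local network, the firewall rules for the tunnel are the place to look.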
URL: From f.thommen at dkfz-heidelberg.de Fri Feb 9 19:11:33 2018 From: f.thommen at dkfz-heidelberg.de (Frank Thommen) Date: Fri, 9 Feb 2018 20:11:33 +0100 Subject: [ovirt-users] Virt-viewer not working over VPN In-Reply-To: References: Message-ID: <15452e9e-3130-6794-4eed-74a1563ecb1a@dkfz-heidelberg.de> On 02/09/2018 07:55 PM, Vincent Royer wrote: > Hi, I asked this on the virt-viewer list, but it appears to be dead, so > my apologies if this isn't the right place for this question. > > When I access my vm's locally using virt-viewer on windows clients, > everything works fine, spice or vnc. > > When I access the same vm's remotely over a site-to-site VPN (setup > between the two firewalls), it fails with an error: unable to connect to > libvirt with uri: [none]. Similarly I cannot connect in a browser-based > vnc session (cannot connect to host). > > I can resolve the DNS of the server from my remote client (domain > override in the firewall pointing to the DNS server locally) and > everything else I do seems completely unaware of the vpn link (SSH, RDP, > etc). For example connecting to https://ovirt-enginr.mydomain.com works > as expected. The only function not working remotely is virt-viewer. > > Any clues would be appreciated! Probably not all ports have been made accessible for VPN users in your internal network. E.g. our VPN setup blocks most ports with the exception of the usual SSH and http/s ports. Hence virt-viewer doesn't work. If you have the possibility, connect to a virtual desktop within your organization and run your session there. frank From jlawrence at squaretrade.com Fri Feb 9 19:17:42 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Fri, 9 Feb 2018 11:17:42 -0800 Subject: [ovirt-users] 4.2 aaa LDAP setup issue Message-ID: <776DB316-C6A5-4A64-88CA-88A92AE5F7B7@squaretrade.com> Hello, I'm bringing up a new 4.2 cluster and would like to use LDAP auth.
Our LDAP servers are fine and function normally for a number of other services, but I can't get this working. Our LDAP setup requires startTLS and a login. That last bit seems to be where the trouble is. After ovirt-engine-extension-aaa-ldap-setup asks for the cert and I pass it the path to the same cert used via nslcd/PAM for logging in to the host, it replies: [ INFO ] Connecting to LDAP using 'ldap://x.squaretrade.com:389' [ INFO ] Executing startTLS [WARNING] Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} [ ERROR ] Cannot connect using any of available options "Unwilling to perform" makes me think -aaa-ldap-setup is trying something the backend doesn't support, but I'm having trouble guessing what that could be since the tool hasn't gathered sufficient information to connect yet - it asks for a DN/pass later in the script. And the log isn't much more forthcoming. I double-checked the cert with openssl; it is a valid, PEM-encoded cert. Before I head in to the code, has anyone seen this? 
Thanks, -j - - - - snip - - - - Relevant log details: 2018-02-08 15:15:08,625-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._getURLs:281 URLs: ['ldap://x.squaretrade.com:389'] 2018-02-08 15:15:08,626-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:391 Connecting to LDAP using 'ldap://x.squaretrade.com:389' 2018-02-08 15:15:08,627-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:442 Executing startTLS 2018-02-08 15:15:08,640-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:445 Perform search 2018-02-08 15:15:08,641-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:459 Exception Traceback (most recent call last): File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 451, in _connectLDAP timeout=60, File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 555, in search_st return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout) File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 546, in search_ext_s return self.result(msgid,all=1,timeout=timeout)[1] File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 458, in result resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout) File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 462, in result2 resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout) File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 469, in result3 resp_ctrl_classes=resp_ctrl_classes File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 476, in result4 ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in _ldap_call result = 
func(*args,**kwargs) UNWILLING_TO_PERFORM: {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} 2018-02-08 15:15:08,642-0800 WARNING otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:463 Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} 2018-02-08 15:15:08,643-0800 ERROR otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:787 Cannot connect using any of available options 2018-02-08 15:15:08,644-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:788 Exception Traceback (most recent call last): File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 782, in _customization_late insecure=insecure, File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 468, in _connectLDAP _('Cannot connect using any of available options') SoftRuntimeError: Cannot connect using any of available options From ranjithspr13 at yahoo.com Fri Feb 9 19:25:24 2018 From: ranjithspr13 at yahoo.com (ranjithspr13 at yahoo.com) Date: Fri, 9 Feb 2018 19:25:24 +0000 (UTC) Subject: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt References: <438319960.1968243.1518204324812.ref@mail.yahoo.com> Message-ID: <438319960.1968243.1518204324812@mail.yahoo.com> Hi, Anyone can suggest how to setup VM Live migration (without restart vm) while Hypervisor goes down in ovirt? Using glusterfs is it possible? Then how? Thanks & Regards Ranjith Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ykaul at redhat.com Fri Feb 9 20:37:01 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 9 Feb 2018 22:37:01 +0200 Subject: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt In-Reply-To: <438319960.1968243.1518204324812@mail.yahoo.com> References: <438319960.1968243.1518204324812.ref@mail.yahoo.com> <438319960.1968243.1518204324812@mail.yahoo.com> Message-ID: On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13 at yahoo.com < ranjithspr13 at yahoo.com> wrote: > Hi, > Anyone can suggest how to setup VM Live migration (without restart vm) > while Hypervisor goes down in ovirt? > I think there are two parts to achieving this: 1. Have a script that migrates VMs off a specific host. This should be easy to write using the Python/Ruby/Java SDK, Ansible or using REST directly. 2. Having this script run as a service when a host shuts down, in the right order - well before libvirt and VDSM shut down, and would be fast enough not to be terminated by systemd. This is a bit more challenging. Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due to overheating or otherwise?) Y. > Using glusterfs is it possible? Then how? > > Thanks & Regards > Ranjith > > Sent from Yahoo Mail on Android > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
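[Editor's note] Yaniv's part 1 (a script that migrates VMs off a specific host) could be sketched with the Python SDK roughly as follows. This is an untested outline: it assumes ovirt-engine-sdk-python (ovirtsdk4) is installed, the engine URL and credentials are placeholders, and the migration_order helper is only an illustrative design choice (drain highly-available VMs first), not anything the SDK provides:

```python
def migration_order(vms):
    """Drain order: highly-available VMs first, then the rest by name.
    `vms` is a list of (name, is_highly_available) tuples."""
    return [name for name, ha in sorted(vms, key=lambda v: (not v[1], v[0]))]

def drain_host(engine_url, user, password, host_name):
    """Ask the engine to live-migrate every VM off `host_name`, letting the
    scheduler pick the destination host for each one."""
    import ovirtsdk4 as sdk  # requires ovirt-engine-sdk-python
    connection = sdk.Connection(url=engine_url, username=user,
                                password=password, insecure=True)
    try:
        vms_service = connection.system_service().vms_service()
        # Engine search syntax: all VMs currently running on this host.
        for vm in vms_service.list(search='host=%s' % host_name):
            vms_service.vm_service(vm.id).migrate()
    finally:
        connection.close()

print(migration_order([("web1", False), ("db1", True)]))  # ['db1', 'web1']
```

Part 2 (running this early enough in the host's shutdown sequence) would still need a systemd unit ordered before libvirtd and vdsmd stop, as Yaniv notes.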
URL: From ccox at endlessnow.com Fri Feb 9 21:14:48 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Fri, 9 Feb 2018 15:14:48 -0600 Subject: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt In-Reply-To: <438319960.1968243.1518204324812@mail.yahoo.com> References: <438319960.1968243.1518204324812.ref@mail.yahoo.com> <438319960.1968243.1518204324812@mail.yahoo.com> Message-ID: <3160c23b-7d4d-5372-cf04-1cbd0aa54e8d@endlessnow.com> On 02/09/2018 01:25 PM, ranjithspr13 at yahoo.com wrote: > Hi, > Anyone can suggest how to setup VM Live migration (without restart vm) > while Hypervisor goes down in ovirt? > Using glusterfs is it possible? Then how? Can't speak for glusterfs specifically, but this is how things work. All nodes know about the storage domains... this is so a different node can "take over" in a live migration. It is not necessarily designed for a node to go down hard (but let's say the node has communication problems: it might get fenced and live migration begins, and of course the usual maintenance mode would force a live migration)... for hard-down scenarios you might want to look at HA for the VMs. From maozza at gmail.com Fri Feb 9 21:34:16 2018 From: maozza at gmail.com (maoz zadok) Date: Fri, 9 Feb 2018 23:34:16 +0200 Subject: [ovirt-users] Importing Libvirt Kvm Vms to oVirt Status: Released in oVirt 4.2 using ssh - Failed to communicate with the external provider In-Reply-To: References: Message-ID: Renout, Thank you! Now it works :-) It makes sense. On Fri, Feb 9, 2018 at 12:03 PM, Renout Gerrits wrote: > Hi Maoz, > > You should not be using the engine nor the root user for the ssh keys. > The actions are delegated to a host and the vdsm user, so you should set up > ssh keys for the vdsm user on one or all of the hosts (remember to select > this host as proxy host in the gui). Probably the documentation should be > updated to make this more clear. > > 1.
Make the keygen for the vdsm user: > > # sudo -u vdsm ssh-keygen > > 2. Do the first login to confirm the fingerprints using "yes": > > # sudo -u vdsm ssh root at xxx.xxx.xxx.xxx > > 3. Then copy the key to the KVM host running the vm: > > # sudo -u vdsm ssh-copy-id root at xxx.xxx.xxx.xxx > > 4. Now verify whether vdsm can log in without a password: > > # sudo -u vdsm ssh root at xxx.xxx.xxx.xxx > > > On Thu, Feb 8, 2018 at 3:12 PM, Petr Kotas wrote: >> You can generate one :). There are different guides for different >> platforms. >> >> The link I sent is a good start on where to put the keys and how to set >> it up. >> >> Petr >> >> On Thu, Feb 8, 2018 at 3:09 PM, maoz zadok wrote: >>> Using the command line on the engine machine (as root) works fine. I >>> don't use an ssh key from the agent GUI but the authentication section (with >>> root user and password). >>> I think that it's a bug; I managed to migrate with TCP, but I just want to >>> let you know. >>> >>> Is it possible to use an ssh key from the agent GUI? How can I get the key? >>> >>> On Thu, Feb 8, 2018 at 2:51 PM, Petr Kotas wrote: >>>> Hi Maoz, >>>> >>>> it looks like it cannot connect due to a wrong setup of the ssh keys. Which >>>> linux are you using? >>>> The guide for setting up the ssh connection to libvirt is here: >>>> https://wiki.libvirt.org/page/SSHSetup >>>> >>>> Maybe it helps? >>>> >>>> Petr >>>> >>>> On Wed, Feb 7, 2018 at 10:53 PM, maoz zadok wrote: >>>>> Hello there, >>>>> >>>>> I'm following the https://www.ovirt.org/develop/ >>>>> release-management/features/virt/KvmToOvirt/ guide in order to import >>>>> VMs from Libvirt to oVirt using ssh. >>>>> URL: "qemu+ssh://host1.example.org/system" >>>>> >>>>> and get the following error: >>>>> Failed to communicate with the external provider, see log for >>>>> additional details.
>>>>> >>>>> >>>>> *oVirt agent log:* >>>>> >>>>> *- Failed to retrieve VMs information from external server >>>>> qemu+ssh://XXX.XXX.XXX.XXX/system* >>>>> *- VDSM XXX command GetVmsNamesFromExternalProviderVDS failed: Cannot >>>>> recv data: Host key verification failed.: Connection reset by peer* >>>>> >>>>> >>>>> >>>>> *remote host sshd DEBUG log:* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: Connection from XXX.XXX.XXX.147 >>>>> port 48148 on XXX.XXX.XXX.123 port 22* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Client protocol version >>>>> 2.0; client software version OpenSSH_7.4* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: match: OpenSSH_7.4 pat >>>>> OpenSSH* compat 0x04000000* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Local version string >>>>> SSH-2.0-OpenSSH_7.4* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Enabling compatibility mode >>>>> for protocol 2.0* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SELinux support disabled >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: permanently_set_uid: 74/74 >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: list_hostkey_types: >>>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT sent >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_KEXINIT received >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: algorithm: >>>>> curve25519-sha256 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: host key algorithm: >>>>> ecdsa-sha2-nistp256 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: client->server cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: server->client cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 >>>>> 
need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting >>>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: rekey after 134217728 >>>>> blocks [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: SSH2_MSG_NEWKEYS sent >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: expecting SSH2_MSG_NEWKEYS >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: Connection closed by >>>>> XXX.XXX.XXX.147 port 48148 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: do_cleanup* >>>>> *Feb 7 16:38:29 XXX sshd[110005]: debug1: Killing privsep child >>>>> 110006* >>>>> *Feb 7 16:38:29 XXX sshd[109922]: debug1: Forked child 110007.* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Set >>>>> /proc/self/oom_score_adj to 0* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rexec start in 5 out 5 >>>>> newsock 5 pipe 7 sock 8* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: inetd sockets after >>>>> dupping: 3, 3* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: Connection from XXX.XXX.XXX.147 >>>>> port 48150 on XXX.XXX.XXX.123 port 22* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Client protocol version >>>>> 2.0; client software version OpenSSH_7.4* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: match: OpenSSH_7.4 pat >>>>> OpenSSH* compat 0x04000000* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Local version string >>>>> SSH-2.0-OpenSSH_7.4* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Enabling compatibility mode >>>>> for protocol 2.0* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SELinux support disabled >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: permanently_set_uid: 74/74 >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: list_hostkey_types: >>>>> 
ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT sent >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_KEXINIT received >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: algorithm: >>>>> curve25519-sha256 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: host key algorithm: >>>>> ecdsa-sha2-nistp256 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: client->server cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: server->client cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting >>>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: rekey after 134217728 >>>>> blocks [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: SSH2_MSG_NEWKEYS sent >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: expecting SSH2_MSG_NEWKEYS >>>>> [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: Connection closed by >>>>> XXX.XXX.XXX.147 port 48150 [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup [preauth]* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: do_cleanup* >>>>> *Feb 7 16:38:29 XXX sshd[110007]: debug1: Killing privsep child >>>>> 110008* >>>>> *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110009.* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Set >>>>> /proc/self/oom_score_adj to 0* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rexec start in 5 out 5 >>>>> newsock 5 pipe 7 sock 8* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: inetd 
sockets after >>>>> dupping: 3, 3* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: Connection from XXX.XXX.XXX.147 >>>>> port 48152 on XXX.XXX.XXX.123 port 22* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Client protocol version >>>>> 2.0; client software version OpenSSH_7.4* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: match: OpenSSH_7.4 pat >>>>> OpenSSH* compat 0x04000000* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Local version string >>>>> SSH-2.0-OpenSSH_7.4* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Enabling compatibility mode >>>>> for protocol 2.0* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SELinux support disabled >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: permanently_set_uid: 74/74 >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: list_hostkey_types: >>>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT sent >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_KEXINIT received >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: algorithm: >>>>> curve25519-sha256 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: host key algorithm: >>>>> ecdsa-sha2-nistp256 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: client->server cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: server->client cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting >>>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: rekey after 134217728 >>>>> blocks 
[preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: SSH2_MSG_NEWKEYS sent >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: expecting SSH2_MSG_NEWKEYS >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: Connection closed by >>>>> XXX.XXX.XXX.147 port 48152 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: do_cleanup* >>>>> *Feb 7 16:38:30 XXX sshd[110009]: debug1: Killing privsep child >>>>> 110010* >>>>> *Feb 7 16:38:30 XXX sshd[109922]: debug1: Forked child 110011.* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Set >>>>> /proc/self/oom_score_adj to 0* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rexec start in 5 out 5 >>>>> newsock 5 pipe 7 sock 8* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: inetd sockets after >>>>> dupping: 3, 3* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: Connection from XXX.XXX.XXX.147 >>>>> port 48154 on XXX.XXX.XXX.123 port 22* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Client protocol version >>>>> 2.0; client software version OpenSSH_7.4* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: match: OpenSSH_7.4 pat >>>>> OpenSSH* compat 0x04000000* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Local version string >>>>> SSH-2.0-OpenSSH_7.4* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: Enabling compatibility mode >>>>> for protocol 2.0* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SELinux support disabled >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: permanently_set_uid: 74/74 >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: list_hostkey_types: >>>>> ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT sent >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_KEXINIT received >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: algorithm: >>>>> curve25519-sha256 
[preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: host key algorithm: >>>>> ecdsa-sha2-nistp256 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: client->server cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: server->client cipher: >>>>> chacha20-poly1305 at openssh.com MAC: >>>>> compression: none [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: kex: curve25519-sha256 >>>>> need=64 dh_need=64 [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting >>>>> SSH2_MSG_KEX_ECDH_INIT [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: rekey after 134217728 >>>>> blocks [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: SSH2_MSG_NEWKEYS sent >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: debug1: expecting SSH2_MSG_NEWKEYS >>>>> [preauth]* >>>>> *Feb 7 16:38:30 XXX sshd[110011]: Connection closed by >>>>> XXX.XXX.XXX.147 port 48154 [preauth]* >>>>> >>>>> >>>>> Thank you! >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From reznikov_aa at soskol.com Fri Feb 9 21:48:52 2018
From: reznikov_aa at soskol.com (reznikov_aa at soskol.com)
Date: Sat, 10 Feb 2018 00:48:52 +0300
Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf
In-Reply-To: 
References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com>
Message-ID: <7004f464280bca707b1b0912dcf07988@soskol.com>

Simone Tiraboschi wrote on 2018-02-09 15:17:
> It shouldn't happen.
> I suspect that something went wrong creating the configuration volume
> on the shared storage at the end of the deployment.
>
> Alexei, can both of you attach your hosted-engine-setup logs?
> Can you please check what happens on
>   hosted-engine --get-shared-config gateway
>
> Thanks

Simone, my ovirt cluster was upgraded from 3.4... and the logs I have are too old.

I'm confused by the execution of "hosted-engine --get-shared-config gateway"...
I get the output "gateway: 10.245.183.1, type: he_conf", but my current hosted-engine.conf is overwritten by the other hosted-engine.conf.
old file:

fqdn = eng.lan
vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
storage = ssd.lan:/ovirt
service_start_time = 0
host_id = 3
console = vnc
domainType = nfs3
sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
ca_cert = /etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject = "C = EN, L = Test, O = Test, CN = Test"
vdsm_use_ssl = true
gateway = 10.245.183.1
bridge = ovirtmgmt
metadata_volume_UUID =
metadata_image_UUID =
lockspace_volume_UUID =
lockspace_image_UUID =
# The following are used only for iSCSI storage
iqn =
portal =
user =
password =
port =
conf_volume_UUID = a20d9700-1b9a-41d8-bb4b-f2b7c168104f
conf_image_UUID = b5f353f5-9357-4aad-b1a3-751d411e6278
conf = /var/run/ovirt-hosted-engine-ha/vm.conf
vm_disk_vol_id = cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b
spUUID = 00000000-0000-0000-0000-000000000000

new (rewritten) file:

fqdn = eng.lan
vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749
vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228
storage = ssd.lan:/ovirt
conf = /etc/ovirt-hosted-engine/vm.conf
service_start_time = 0
host_id = 3
console = vnc
domainType = nfs3
spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a
sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763
connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4
ca_cert = /etc/pki/vdsm/libvirt-spice/ca-cert.pem
ca_subject = "C = EN, L = Test, O = Test, CN = Test"
vdsm_use_ssl = true
gateway = 10.245.183.1
bridge = ovirtmgmt
metadata_volume_UUID =
metadata_image_UUID =
lockspace_volume_UUID =
lockspace_image_UUID =
# The following are used only for iSCSI storage
iqn =
portal =
user =
password =
port =

And this is the case on all hosts in the cluster! It seems to me that these are some remnants of versions 3.4, 3.5 ...
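Alexei's before/after listing is long, and eyeballing which keys `--get-shared-config` actually rewrote is error-prone. A small parse-and-diff sketch may help; the two inline snippets below are abbreviated, hypothetical stand-ins for the full files quoted above:

```python
def parse_conf(text):
    """Parse simple 'key = value' lines into a dict (comment lines skipped)."""
    conf = {}
    for line in text.splitlines():
        if "=" in line and not line.lstrip().startswith("#"):
            key, _, value = line.partition("=")
            conf[key.strip()] = value.strip()
    return conf

# Abbreviated stand-ins for the old and the rewritten hosted-engine.conf.
old = parse_conf("""
conf = /var/run/ovirt-hosted-engine-ha/vm.conf
spUUID = 00000000-0000-0000-0000-000000000000
gateway = 10.245.183.1
""")
new = parse_conf("""
conf = /etc/ovirt-hosted-engine/vm.conf
spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a
gateway = 10.245.183.1
""")

# Keys whose value differs (a value of None means the key is absent).
changed = {k: (old.get(k), new.get(k))
           for k in sorted(set(old) | set(new))
           if old.get(k) != new.get(k)}
for key, (before, after) in changed.items():
    print("%s: %r -> %r" % (key, before, after))
```

Run against the full files quoted above, this would additionally flag the keys that disappeared in the rewrite (conf_volume_UUID, conf_image_UUID, vm_disk_vol_id), not just the changed `conf` path and `spUUID`.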
From ranjithspr13 at yahoo.com Sat Feb 10 07:30:46 2018
From: ranjithspr13 at yahoo.com (Ranjith P)
Date: Sat, 10 Feb 2018 07:30:46 +0000 (UTC)
Subject: [ovirt-users] Live migration of VM (0 downtime) while Hypervisor goes down in ovirt
In-Reply-To: 
References: <438319960.1968243.1518204324812.ref@mail.yahoo.com> <438319960.1968243.1518204324812@mail.yahoo.com>
Message-ID: <927677519.2261450.1518247846805@mail.yahoo.com>

Hi,

>> Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due to overheating or otherwise?)

We need continuous availability of VMs in our production setup. If a hypervisor goes down due to a hardware failure or workload, the VMs on that hypervisor reboot and are started on the available hypervisors. This is what normally happens, but it disrupts the VMs. Can you suggest a solution in this case? Can we achieve this with glusterfs?

Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android

On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul wrote:

On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13 at yahoo.com wrote:

Hi,
Can anyone suggest how to set up VM live migration (without restarting the VM) when a hypervisor goes down in oVirt?

I think there are two parts to achieving this:
1. Have a script that migrates VMs off a specific host. This should be easy to write using the Python/Ruby/Java SDK, Ansible or using REST directly.
2. Having this script run as a service when a host shuts down, in the right order - well before libvirt and VDSM shut down - and fast enough not to be terminated by systemd.
This is a bit more challenging.

Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due to overheating or otherwise?)
Y.

Using glusterfs is it possible? Then how?

Thanks & Regards
Ranjith

Sent from Yahoo Mail on Android

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
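Yaniv's step 1 above - a script that migrates the VMs off a specific host - can be sketched with the oVirt Python SDK roughly as below. This is an untested outline, not oVirt's own tooling: the engine URL, credentials, and host name are placeholders supplied by the caller, `insecure=True` and the absent error handling are simplifications, and `ovirtsdk4` must be installed where the script runs.

```python
def vms_to_evacuate(vms, source_host_id):
    """Pure helper: names of VMs running on the host being drained.

    `vms` is a list of (name, host_id, is_up) tuples."""
    return [name for (name, host_id, is_up) in vms
            if host_id == source_host_id and is_up]


def evacuate(engine_url, user, password, host_name):
    # Imported lazily so the helper above stays usable on its own.
    import ovirtsdk4 as sdk

    conn = sdk.Connection(url=engine_url, username=user,
                          password=password, insecure=True)
    try:
        system = conn.system_service()
        host = system.hosts_service().list(search="name=%s" % host_name)[0]
        vms_service = system.vms_service()
        snapshot = [(vm.name, vm.host.id if vm.host else None,
                     vm.status == sdk.types.VmStatus.UP)
                    for vm in vms_service.list()]
        for name in vms_to_evacuate(snapshot, host.id):
            vm = vms_service.list(search="name=%s" % name)[0]
            # With no destination given, the engine picks a target host
            # according to the cluster's scheduling policy.
            vms_service.vm_service(vm.id).migrate()
    finally:
        conn.close()
```

Step 2 (running this reliably during shutdown, before libvirt and VDSM stop) is the harder part, as Yaniv notes; one avenue would be a systemd unit ordered `After=vdsmd.service` - so it is stopped first during shutdown - running the evacuation from `ExecStop` with a generous `TimeoutStopSec`.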
URL: From xrs444 at xrs444.net Sat Feb 10 15:47:45 2018 From: xrs444 at xrs444.net (Thomas Letherby) Date: Sat, 10 Feb 2018 15:47:45 +0000 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: References: <1518084724645.71365@leedsbeckett.ac.uk> Message-ID: That's exactly what I needed to know, thanks all. I'll schedule a script for the nodes to reboot and patch once every week or two and then I can let it run without me needing to worry about it. Thomas On Fri, Feb 9, 2018, 2:26 AM Martin Sivak wrote: > Hi, > > the hosts are almost stateless and we set up most of what is needed > during activation. Hosted engine has some configuration stored > locally, but that is just the path to the storage domain. > > I think you should be fine unless you change the network topology > significantly. I would also install security updates once in while. > > We can even shut down the hosts for you when you configure two cluster > scheduling properties: EnableAutomaticPM and HostsInReserve. > HostsInReserve should be at least 1 though. It behaves like this, as > long as the reserve host is empty, we shut down all the other empty > hosts. And we boot another host once a VM does not fit on other used > hosts and is places on the running reserve host. That would save you > the power of just one host, but it would still be highly available (if > hosted engine and storage allows that too). > > Bear in mind that single host cluster is not highly available at all. > > Best regards > > Martin Sivak > > On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi > wrote: > > On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby > wrote: > >> > >> Thanks, that answers my follow up question! :) > >> > >> My concern is that I could have a host off-line for a month say, is that > >> going to cause any issues? 
> >> > >> Thanks, > >> > >> Thomas > >> > > > > I think that if in the mean time you don't make any configuration changes > > and you don't update anything, there is no reason to have problems. > > In case of changes done, it could depend on what they are: are you > thinking > > about any particular scenario? > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Sat Feb 10 18:41:48 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Sat, 10 Feb 2018 19:41:48 +0100 Subject: [ovirt-users] Info about windows guest performance In-Reply-To: References: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> Message-ID: On Fri, Feb 9, 2018 at 4:32 PM, Gianluca Cecchi wrote: > > > If I edit the VM, in general settings I see "Other OS" as operating system. > In General subtab after selecting the VM in "Virtual Machines" tab I > again see "Other OS" in "Operating System" and the field "Origin" filled > with the value "VMware" > > During virt-v2v it seems it was recognized as Windows 2008 though... > > libguestfs: trace: v2v: hivex_value_utf8 = "Windows Server 2008 R2 > Enterprise" > libguestfs: trace: v2v: hivex_value_key 11809408 > > I can send all the log if it can help. > Thanks, > Gianluca > So it seems it has been a problem with virt-v2v conversion, because if I shutdown the VM and set it to Windows 2008 R2 x86_64 and optimized for server and I run it, I get this flag for the cpu: -cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff BTW: what are the other flags for: hv_spinlocks=0x1fff hv_relaxed hv_vapic ? 
Complete command is: /usr/libexec/qemu-kvm -name guest=testmig,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-15-testmig/master-key.aes -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off -cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m size=4194304k,slots=16,maxmem=16777216k -realtime mlock=off -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 -numa node,nodeid=0,cpus=0-1,mem=4096 -uuid XXX -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7.centos,serial=XXX,uuid=YYY -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-15-testmig/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2018-02-10T18:32:22,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x4 -drive if=none,id=drive-ide0-1-0,readonly=on -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0-45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0-0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive-scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3-9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184-4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0-0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none,werror=stop,rerror=stop,aio=native -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0 
-netdev tap,fd=30,id=hostnet0 -device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -msg timestamp=on Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.am.stack at gmail.com Sun Feb 11 00:43:09 2018 From: i.am.stack at gmail.com (~Stack~) Date: Sat, 10 Feb 2018 18:43:09 -0600 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: References: Message-ID: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> On 02/08/2018 06:42 AM, Petr Kotas wrote: > Hi Stack, Greetings Petr > have you tried it on other linux distributions? Scientific is not > officially supported. No, but SL isn't really any different than CentOS. If anything, we've found it adheres closer to RH than CentOS does. > My guess based on your log is there are somewhere missing certificates, > maybe different path?. 
> You can check the paths by the documentation: > https://www.ovirt.org/develop/release-management/features/infra/pki/#vdsm > > Hope this helps. Thanks for the suggestion. It took a while but we dug into it and I *think* the problem was because I may have over-written the wrong cert file in one of my steps. I'm only about 80% certain of that, but it seems to match what we found when we were digging through the log files. We decided to just start from scratch and my coworker watched and confirmed every step. It works! No problems at all this time. Further evidence that I goofed _something_ up the first time. Thank you for the suggestion! ~Stack~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From wstewart3 at gmail.com Sun Feb 11 01:00:18 2018 From: wstewart3 at gmail.com (Wesley Stewart) Date: Sat, 10 Feb 2018 20:00:18 -0500 Subject: [ovirt-users] Using network assigned to VM on CentOS host? Message-ID: This might be a stupid question. But I am testing out a 10Gb network directly connected to my Freenas box using a Cat6 crossover cable. I setup the connection (on device eno4) and called the network "Crossover" in oVirt. I dont have DHCP on this, but I can easy assign VMs a NIC on the "Crossover" network, assign them an ip address (10.10.10.x) and everything works fine. But I was curious about doing this for the CentOS host as well. I want to test out hosting VM's on the NFS share over the 10Gb network but I wasn't quite sure how to do this without breaking other connections and I did not want to do anything incorrectly. I appreciate your feedback! I apologize if this is a stupid question. Running oVirt 4.1.8 on CentOS 7.4 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ishaby at redhat.com Sun Feb 11 07:55:47 2018 From: ishaby at redhat.com (Idan Shaby) Date: Sun, 11 Feb 2018 09:55:47 +0200 Subject: [ovirt-users] effectiveness of "discard=unmap" In-Reply-To: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at> References: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at> Message-ID: Hi Matthias, When the guest executes a discard call of any variation (fstrim, blkdiscard, etc.), the underlying thinly provisioned LUN is the one that changes - it returns the unused blocks to the storage array and gets smaller. Therefore, no change is visible to the guest OS. If you want to check what has changed, go to the storage array and check what's the size of the underlying thinly provisioned LUN before and after the discard call. The answer for your question and some more information can be found in the feature page [1] (needs a bit of an update, but most of it is still relevant). If you got any further questions, please don't hesitate to ask. Regards, Idan [1] Pass discard from guest to underlying storage - https://www.ovirt.org/develop/release-management/features/storage/pass-discard-from-guest-to-underlying-storage/ On Thu, Feb 8, 2018 at 2:08 PM, Matthias Leopold < matthias.leopold at meduniwien.ac.at> wrote: > Hi, > > i'm sorry to bother you again with my ignorance of the DISCARD feature for > block devices in general. > > after finding several ways to enable "discard=unmap" for oVirt disks (via > standard GUI option for iSCSI disks or via "diskunmap" custom property for > Cinder disks) i wanted to check in the guest for the effectiveness of this > feature. to my surprise i couldn't find a difference between Linux guests > with and without "discard=unmap" enabled in the VM. "lsblk -D" reports the > same in both cases and also fstrim/blkdiscard commands appear to work with > no difference. Why is this? Do i have to look at the underlying storage to > find out what really happens? 
find out what really happens? Shouldn't this be visible in the guest OS?
>
> thx
> matthias
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From usual.man at gmail.com Thu Feb 8 14:47:40 2018
From: usual.man at gmail.com (George Sitov)
Date: Thu, 8 Feb 2018 16:47:40 +0200
Subject: [ovirt-users] ovn problem - Failed to communicate with the external provider, see log for additional details.
In-Reply-To: 
References: 
Message-ID: 

Thank you! It was a certificate problem. I returned it to the PKI one and everything works.

On Feb 8, 2018, 4:44 PM, "Marcin Mirecki" wrote:

Hello George,

Probably your engine and provider certs do not match.
The engine pki should be in: /etc/pki/ovirt-engine/certs/
The provider keys are defined in the SSL section of the config file (/etc/ovirt-provider-ovn/conf.d/...):

[SSL]
https-enabled=true
ssl-key-file=...
ssl-cert-file=...
ssl-cacert-file=...

You can compare the keys/certs using openssl.
Was the provider created using engine-setup?
For testing purposes you can change "https-enabled" to false and try connecting using http.

Thanks,
Marcin

On Thu, Feb 8, 2018 at 12:58 PM, Ilya Fedotov wrote:

> Hello, Georgy
>
> Maybe the problem is that your node name (local domain) and the domain
> name differ, and the certificate is not valid.
>
> with br, Ilya
>
> 2018-02-05 22:36 GMT+03:00 George Sitov :
>
>> Hello!
>>
>> I have a problem with configuring an external provider.
>>
>> Edit config file - ovirt-provider-ovn.conf, set ssl parameters.
>> systemctl start ovirt-provider-ovn starts without problem.
>> In external proveder in web gui i set: >> Provider URL: https://ovirt.mydomain.com:9696 >> Username: admin at internal >> Authentication URL: https://ovirt.mydomain.com:35357/v2.0/ >> But after i press test button i see error - Failed to communicate with >> the external provider, see log for additional details. >> >> /var/log/ovirt-engine/engine.log: >> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro >> vider.network.openstack.BaseNetworkProviderProxy] (default task-29) >> [69fa312e-6e2e-4925-b081-385beba18a6a] Bad Gateway (OpenStack response >> error code: 502) >> 2018-02-05 21:33:55,517+02 ERROR [org.ovirt.engine.core.bll.pro >> vider.TestProviderConnectivityCommand] (default task-29) >> [69fa312e-6e2e-4925-b081-385beba18a6a] Command ' >> org.ovirt.engine.core.bll.provider.TestProviderConnectivityCommand' >> failed: EngineException: (Failed with error PROVIDER_FAILURE and code 5050) >> >> In /var/log/ovirt-provider-ovn.log: >> >> 2018-02-05 21:33:55,510 Starting new HTTPS connection (1): >> ovirt.astrecdata.com >> 2018-02-05 21:33:55,516 [SSL: CERTIFICATE_VERIFY_FAILED] certificate >> verify failed (_ssl.c:579) >> Traceback (most recent call last): >> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >> 126, in _handle_request >> method, path_parts, content) >> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >> line 176, in handle_request >> return self.call_response_handler(handler, content, parameters) >> File "/usr/share/ovirt-provider-ovn/handlers/keystone.py", line 33, in >> call_response_handler >> return response_handler(content, parameters) >> File "/usr/share/ovirt-provider-ovn/handlers/keystone_responses.py", >> line 60, in post_tokens >> user_password=user_password) >> File "/usr/share/ovirt-provider-ovn/auth/plugin_facade.py", line 26, >> in create_token >> return auth.core.plugin.create_token(user_at_domain, user_password) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/plugin.py", >> 
line 48, in create_token >> timeout=self._timeout()) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 62, in create_token >> username, password, engine_url, ca_file, timeout) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 53, in wrapper >> response = func(*args, **kwargs) >> File "/usr/share/ovirt-provider-ovn/auth/plugins/ovirt/sso.py", line >> 46, in wrapper >> raise BadGateway(e) >> BadGateway: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed >> (_ssl.c:579) >> >> Whan i do wrong ? >> Please help. >> >> ---- >> With best regards Georgii. >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sun Feb 11 08:26:46 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 11 Feb 2018 10:26:46 +0200 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> References: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> Message-ID: On Sun, Feb 11, 2018 at 2:43 AM, ~Stack~ wrote: > On 02/08/2018 06:42 AM, Petr Kotas wrote: > > Hi Stack, > > Greetings Petr > > > have you tried it on other linux distributions? Scientific is not > > officially supported. > > No, but SL isn't really any different than CentOS. If anything, we've > found it adheres closer to RH than CentOS does. > > > My guess based on your log is there are somewhere missing certificates, > > maybe different path?. > > You can check the paths by the documentation: > > https://www.ovirt.org/develop/release-management/features/ > infra/pki/#vdsm > > > > Hope this helps. > > > Thanks for the suggestion. 
It took a while but we dug into it and I
> *think* the problem was because I may have over-written the wrong cert
> file in one of my steps. I'm only about 80% certain of that, but it
> seems to match what we found when we were digging through the log files.
>
> We decided to just start from scratch and my coworker watched and
> confirmed every step. It works! No problems at all this time. Further
> evidence that I goofed _something_ up the first time.
>

We should really have an Ansible role that performs the conversion to self-signed certificates. That would make the conversion easier and safer.
Y.

> Thank you for the suggestion!
> ~Stack~
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mburman at redhat.com Sun Feb 11 08:28:15 2018
From: mburman at redhat.com (Michael Burman)
Date: Sun, 11 Feb 2018 10:28:15 +0200
Subject: [ovirt-users] Network configuration validation error
In-Reply-To: <20180209112146.2F2D0E12B8@smtp01.mail.de>
References: <20180209112146.2F2D0E12B8@smtp01.mail.de>
Message-ID: 

Jiri, can you maybe assist the user with his upgrade problem?
I'm not familiar with the error, and I'm not sure how he can already have cluster version 4.2 before upgrading the engine to 4.2... unless I misunderstood it.

Thanks,

On Fri, Feb 9, 2018 at 1:21 PM, wrote:

> Hi,
> Could someone explain to me at least what "Cluster PROD is at version 4.2
> which is not supported by this upgrade flow. Please fix it before
> upgrading." means? As far as I know 4.2 is the most recent branch
> available, isn't it?
> Regards > > > > Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com a ?crit: > > Not sure i understand from which version you trying to upgrade and what is > the exact upgrade flow, if i got it correctly, it is seems that you > upgraded the hosts to 4.2, but engine still 4.1? > What exactly the upgrade steps, please explain the flow., what have you > done after upgrading the hosts? to what version? > > Cheers) > > On Wed, Feb 7, 2018 at 3:00 PM, wrote: > >> Hi, >> Thanks a lot for your answer. >> >> I applied some updates at node level, but I forgot to upgrade the engine ! >> >> When I try to do so I get a strange error : "Cluster PROD is at version >> 4.2 which is not supported by this upgrade flow. Please fix it before >> upgrading." >> >> Here are the installed packets on my nodes : >> python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 >> ovirt-imageio-common-1.2.0-1.el7.centos.noarch >> ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch >> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch >> ovirt-setup-lib-1.1.4-1.el7.centos.noarch >> ovirt-release42-4.2.0-1.el7.centos.noarch >> ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch >> ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 >> ovirt-host-4.2.0-1.el7.centos.x86_64 >> ovirt-host-deploy-1.7.0-1.el7.centos.noarch >> ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch >> ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch >> ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch >> ovirt-vmconsole-host-1.0.4-1.el7.noarch >> cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch >> ovirt-vmconsole-1.0.4-1.el7.noarch >> >> What I am supposed to do ? I see no newer packages available. 
>> >> Regards >> >> >> >> Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a ?crit: >> >> Hi >> >> This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - >> https://bugzilla.redhat.com/show_bug.cgi?id=1528906 >> >> The no default route bug was fixed in - https://bugzilla.redhat.com/ >> show_bug.cgi?id=1477589 >> >> Thanks, >> >> On Wed, Feb 7, 2018 at 1:15 PM, wrote: >> >>> >>> Hi, >>> I am experiencing a new problem : when I try to modify something in the >>> network setup on the second node (added to the cluster after installing the >>> engine on the other one) using the Engine GUI, I get the following error >>> when validating : >>> >>> must match "^\b((25[0-5]|2[0-4]\d|[01]\d\ >>> d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" >>> Attribut : ipConfiguration.iPv4Addresses[0].gateway >>> >>> Moreover, on the general status of ther server, I have a "Host has no >>> default route" alert. >>> >>> The ovirtmgmt network has a defined gateway of course, and the storage >>> network has none because it is not required. Both server have the same >>> setup, with different addresses of course :-) >>> >>> I have not been able to find anything useful in the logs. >>> >>> Is this a bug or am I doing something wrong ? 
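For reference, the dotted-quad pattern in that validation message can be exercised in a few lines — a minimal sketch only, assuming the octet separator in the real engine pattern is an escaped dot (`\.`); the `\_` in the archived text looks like a mangled escape, not the actual separator:

```python
import re

# Octet alternation copied from the validation message above; the
# separator is assumed to be "\." (the "\_" shown in the mail looks
# like archive mangling of the escape).
OCTET = r"(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)"
IPV4_GATEWAY = re.compile(r"^\b(%s\.){3}%s$" % (OCTET, OCTET))

def valid_gateway(addr):
    """True if addr is a dotted-quad IPv4 address (octets 0-255)."""
    return IPV4_GATEWAY.match(addr) is not None

print(valid_gateway("192.168.1.254"))  # True
print(valid_gateway("256.1.1.1"))      # False - octet out of range
print(valid_gateway(""))               # False - empty string fails too
```

Note that the empty string fails the pattern, which is presumably what bites here: the storage network intentionally has no gateway, so a validator that insists on a full dotted quad rejects it.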
>>> >>> Regards >>> >>> ------------------------------ >>> FreeMail powered by mail.fr >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> >> Michael Burman >> >> Senior Quality engineer - rhv network - redhat israel >> >> Red Hat >> >> >> >> mburman at redhat.com M: 0545355725 IM: mburman >> >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sun Feb 11 08:33:58 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 11 Feb 2018 10:33:58 +0200 Subject: [ovirt-users] Info about windows guest performance In-Reply-To: References: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> Message-ID: On Sat, Feb 10, 2018 at 8:41 PM, Gianluca Cecchi wrote: > On Fri, Feb 9, 2018 at 4:32 PM, Gianluca Cecchi > wrote: > >> >> >> If I edit the VM, in general settings I see "Other OS" as operating >> system. >> In General subtab after selecting the VM in "Virtual Machines" tab I >> again see "Other OS" in "Operating System" and the field "Origin" filled >> with the value "VMware" >> >> During virt-v2v it seems it was recognized as Windows 2008 though... 
>> >> libguestfs: trace: v2v: hivex_value_utf8 = "Windows Server 2008 R2 >> Enterprise" >> libguestfs: trace: v2v: hivex_value_key 11809408 >> >> I can send all the log if it can help. >> Thanks, >> Gianluca >> > > > So it seems it has been a problem with virt-v2v conversion, because if I > shutdown the VM and set it to Windows 2008 R2 x86_64 and optimized for > server and I run it, I get this flag for the cpu: > A new virt-v2v was just released, worth testing it. It has some nice features, and perhaps fixes the above too. For example: Virt-v2v now installs Windows 10 / Windows Server 2016 virtio block drivers correctly (Pavel Butsykin, Kun Wei). Virt-v2v now installs virtio-rng, balloon and pvpanic drivers, and correctly sets this in the target hypervisor metadata for hypervisors which support that (Tom?? Golembiovsk?). Virt-v2v now installs both legacy and modern virtio keys in the Windows registry (Ladi Prosek). > > -cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff > > BTW: what are the other flags for: > > > hv_spinlocks=0x1fff > hv_relaxed > hv_vapic > ? > These are the enlightenment that allow Windows guests to run faster (hv = hyper-v). See[1] Y. [1] http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html > Complete command is: > > /usr/libexec/qemu-kvm > -name guest=testmig,debug-threads=on > -S > -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/ > qemu/domain-15-testmig/master-key.aes > -machine pc-i440fx-rhel7.3.0,accel=kvm,usb=off,dump-guest-core=off > -cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff > -m size=4194304k,slots=16,maxmem=16777216k > -realtime mlock=off > -smp 2,maxcpus=16,sockets=16,cores=1,threads=1 > -numa node,nodeid=0,cpus=0-1,mem=4096 > -uuid XXX > -smbios type=1,manufacturer=oVirt,product=oVirt Node,version=7-4.1708.el7. 
> centos,serial=XXX,uuid=YYY > -no-user-config > -nodefaults > -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain- > 15-testmig/monitor.sock,server,nowait > -mon chardev=charmonitor,id=monitor,mode=control > -rtc base=2018-02-10T18:32:22,driftfix=slew > -global kvm-pit.lost_tick_policy=delay > -no-hpet > -no-shutdown > -boot strict=on > -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 > -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x5 > -device virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci. > 0,addr=0x4 > -drive if=none,id=drive-ide0-1-0,readonly=on > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 > -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3- > 9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/ > 2de93ee3-7d6e-4a10-88c4-abc7a11fb687/a9f4e35b-4aa0- > 45e8-b775-1a046d1851aa,format=qcow2,if=none,id=drive-scsi0- > 0-0-1,serial=2de93ee3-7d6e-4a10-88c4-abc7a11fb687,cache= > none,werror=stop,rerror=stop,aio=native > -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=1,drive=drive- > scsi0-0-0-1,id=scsi0-0-0-1,bootindex=1 > -drive file=/rhev/data-center/ef17cad6-7724-4cd8-96e3- > 9af6e529db51/fa33df49-b09d-4f86-9719-ede649542c21/images/ > f821da0a-cec7-457c-88a4-f83f33404e65/0d0c4244-f184- > 4eaa-b5bf-8dc65c7069bb,format=raw,if=none,id=drive-scsi0-0- > 0-0,serial=f821da0a-cec7-457c-88a4-f83f33404e65,cache=none, > werror=stop,rerror=stop,aio=native > -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive- > scsi0-0-0-0,id=scsi0-0-0-0 > -netdev tap,fd=30,id=hostnet0 > -device e1000,netdev=hostnet0,id=net0,mac=00:50:56:9d:c9:29,bus=pci. 
> 0,addr=0x3 > -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/ > 421d6f1b-58e3-54a4-802f-fb52f7831369.com.redhat.rhevm.vdsm,server,nowait > -device virtserialport,bus=virtio-serial0.0,nr=1,chardev= > charchannel0,id=channel0,name=com.redhat.rhevm.vdsm > -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/ > 421d6f1b-58e3-54a4-802f-fb52f7831369.org.qemu.guest_agent.0,server,nowait > -device virtserialport,bus=virtio-serial0.0,nr=2,chardev= > charchannel1,id=channel1,name=org.qemu.guest_agent.0 > -chardev spicevmc,id=charchannel2,name=vdagent > -device virtserialport,bus=virtio-serial0.0,nr=3,chardev= > charchannel2,id=channel2,name=com.redhat.spice.0 > -spice tls-port=5900,addr=10.4.192.32,x509-dir=/etc/pki/vdsm/ > libvirt-spice,tls-channel=default,tls-channel=main,tls- > channel=display,tls-channel=inputs,tls-channel=cursor,tls- > channel=playback,tls-channel=record,tls-channel=smartcard, > tls-channel=usbredir,seamless-migration=on > -device qxl-vga,id=video0,ram_size=67108864,vram_size=8388608, > vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 > -msg timestamp=on > > Thanks, > Gianluca > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sun Feb 11 08:35:29 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 11 Feb 2018 10:35:29 +0200 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: References: <1518084724645.71365@leedsbeckett.ac.uk> Message-ID: On Sat, Feb 10, 2018 at 5:47 PM, Thomas Letherby wrote: > That's exactly what I needed to know, thanks all. > > I'll schedule a script for the nodes to reboot and patch once every week > or two and then I can let it run without me needing to worry about it. 
> An example on how to check for updates is available @ https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upgrade_host.py Y. > Thomas > > On Fri, Feb 9, 2018, 2:26 AM Martin Sivak wrote: > >> Hi, >> >> the hosts are almost stateless and we set up most of what is needed >> during activation. Hosted engine has some configuration stored >> locally, but that is just the path to the storage domain. >> >> I think you should be fine unless you change the network topology >> significantly. I would also install security updates once in while. >> >> We can even shut down the hosts for you when you configure two cluster >> scheduling properties: EnableAutomaticPM and HostsInReserve. >> HostsInReserve should be at least 1 though. It behaves like this, as >> long as the reserve host is empty, we shut down all the other empty >> hosts. And we boot another host once a VM does not fit on other used >> hosts and is places on the running reserve host. That would save you >> the power of just one host, but it would still be highly available (if >> hosted engine and storage allows that too). >> >> Bear in mind that single host cluster is not highly available at all. >> >> Best regards >> >> Martin Sivak >> >> On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi >> wrote: >> > On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby >> wrote: >> >> >> >> Thanks, that answers my follow up question! :) >> >> >> >> My concern is that I could have a host off-line for a month say, is >> that >> >> going to cause any issues? >> >> >> >> Thanks, >> >> >> >> Thomas >> >> >> > >> > I think that if in the mean time you don't make any configuration >> changes >> > and you don't update anything, there is no reason to have problems. >> > In case of changes done, it could depend on what they are: are you >> thinking >> > about any particular scenario? 
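The EnableAutomaticPM / HostsInReserve behaviour Martin describes can be modelled in a few lines. This is an illustrative toy only — the real scheduling logic lives inside ovirt-engine, and `hosts_to_power_down` is a made-up helper name, not an API:

```python
def hosts_to_power_down(hosts, hosts_in_reserve=1):
    """Model of the EnableAutomaticPM / HostsInReserve policy quoted
    above: keep `hosts_in_reserve` empty hosts running as spare
    capacity and power down the remaining empty ones.

    `hosts` maps host name -> number of running VMs.  Illustrative
    only; not the engine's actual scheduler code.
    """
    empty_hosts = sorted(name for name, vms in hosts.items() if vms == 0)
    # Hosts still running VMs are never touched; the first
    # `hosts_in_reserve` empty hosts stay up as the reserve.
    return empty_hosts[hosts_in_reserve:]

print(hosts_to_power_down({"host1": 3, "host2": 0, "host3": 0}))
# -> ['host3']  (host2 stays up as the single reserve host)
```

With HostsInReserve at 1 this mirrors the described behaviour: one empty host is always kept ready, and the engine boots a powered-down host again once a VM no longer fits on the hosts that are up.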
>> > >> > >> > _______________________________________________ >> > Users mailing list >> > Users at ovirt.org >> > http://lists.ovirt.org/mailman/listinfo/users >> > >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sun Feb 11 08:38:53 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sun, 11 Feb 2018 10:38:53 +0200 Subject: [ovirt-users] Ovirt 3.6 to 4.2 upgrade In-Reply-To: References: Message-ID: On Fri, Feb 9, 2018 at 4:06 PM, Gary Lloyd wrote: > Hi > > Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ? > No, you go through 4.0, 4.1. > Does live migration still function between the older vdsm nodes and vdsm > nodes with software built against Ovirt 4.2 ? > Yes, keep the cluster level at 3.6. > > We changed a couple of the vdsm python files to enable iscsi multipath on > direct luns. > (It's a fairly simple change to a couple of the python files). > Nice! Can you please contribute those patches to oVirt? Y. > > We've been running it this way since 2012 (Ovirt 3.2). > > Many Thanks > > *Gary Lloyd* > ________________________________________________ > I.T. Systems:Keele University > Finance & IT Directorate > Keele:Staffs:IC1 Building:ST5 5NB:UK > +44 1782 733063 <%2B44%201782%20733073> > ________________________________________________ > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From didi at redhat.com Sun Feb 11 08:41:21 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 11 Feb 2018 10:41:21 +0200 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: References: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> Message-ID: On Sun, Feb 11, 2018 at 10:26 AM, Yaniv Kaul wrote: > > > On Sun, Feb 11, 2018 at 2:43 AM, ~Stack~ wrote: >> >> On 02/08/2018 06:42 AM, Petr Kotas wrote: >> > Hi Stack, >> >> Greetings Petr >> >> > have you tried it on other linux distributions? Scientific is not >> > officially supported. >> >> No, but SL isn't really any different than CentOS. If anything, we've >> found it adheres closer to RH than CentOS does. >> >> > My guess based on your log is there are somewhere missing certificates, >> > maybe different path?. >> > You can check the paths by the documentation: >> > >> > https://www.ovirt.org/develop/release-management/features/infra/pki/#vdsm >> > >> > Hope this helps. >> >> >> Thanks for the suggestion. It took a while but we dug into it and I >> *think* the problem was because I may have over-written the wrong cert >> file in one of my steps. I'm only about 80% certain of that, but it >> seems to match what we found when we were digging through the log files. >> >> We decided to just start from scratch and my coworker watched and >> confirmed every step. It works! No problems at all this time. Further >> evidence that I goofed _something_ up the first time. > > > We should really have an Ansible role that performs the conversion to > self-signed certificates. > That would make the conversion easier and safer. +1 Not sure "self-signed" is the correct term here. Also the internal engine CA's cert is self-signed. I guess you refer to this: https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ I'd call it "configure-3rd-party-CA" or something like that. > Y. > >> >> >> Thank you for the suggestion! 
>> ~Stack~ >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Didi From alukiano at redhat.com Sun Feb 11 08:49:16 2018 From: alukiano at redhat.com (Artyom Lukianov) Date: Sun, 11 Feb 2018 10:49:16 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180209112146.2F2D0E12B8@smtp01.mail.de> References: <20180209112146.2F2D0E12B8@smtp01.mail.de> Message-ID: This option relevant only for the upgrade from 3.6 to 4.0(engine had different OS major versions), it all other cases the upgrade flow very similar to upgrade flow of standard engine environment. 1. Put hosted-engine environment to GlobalMaintenance(you can do it via UI) 2. Update engine packages(# yum update -y) 3. Run engine-setup 4. Disable GlobalMaintenance Best Regards On Fri, Feb 9, 2018 at 1:21 PM, wrote: > Hi, > Could someone explain me at least what "Cluster PROD is at version 4.2 > which is not supported by this upgrade flow. Please fix it before > upgrading." means ? As far as I know 4.2 is the most recent branch > available, isn't it ? > Regards > > > > Le 08-Feb-2018 09:59:32 +0100, mburman at redhat.com a ?crit: > > Not sure i understand from which version you trying to upgrade and what is > the exact upgrade flow, if i got it correctly, it is seems that you > upgraded the hosts to 4.2, but engine still 4.1? > What exactly the upgrade steps, please explain the flow., what have you > done after upgrading the hosts? to what version? > > Cheers) > > On Wed, Feb 7, 2018 at 3:00 PM, wrote: > >> Hi, >> Thanks a lot for your answer. >> >> I applied some updates at node level, but I forgot to upgrade the engine ! 
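For readers skimming the thread, the four hosted-engine upgrade steps listed above look roughly like this as a command transcript — illustrative only; steps 1 and 4 run on a hypervisor host (or via the UI), steps 2 and 3 inside the engine VM, and the exact package set may differ per version:

```shell
hosted-engine --set-maintenance --mode=global   # 1. enter global maintenance
yum update -y                                   # 2. update engine packages
engine-setup                                    # 3. re-run engine setup
hosted-engine --set-maintenance --mode=none     # 4. leave global maintenance
```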
>> >> When I try to do so I get a strange error : "Cluster PROD is at version >> 4.2 which is not supported by this upgrade flow. Please fix it before >> upgrading." >> >> Here are the installed packets on my nodes : >> python-ovirt-engine-sdk4-4.2.2-2.el7.centos.x86_64 >> ovirt-imageio-common-1.2.0-1.el7.centos.noarch >> ovirt-hosted-engine-setup-2.2.3-1.el7.centos.noarch >> ovirt-engine-sdk-python-3.6.9.1-1.el7.noarch >> ovirt-setup-lib-1.1.4-1.el7.centos.noarch >> ovirt-release42-4.2.0-1.el7.centos.noarch >> ovirt-imageio-daemon-1.2.0-1.el7.centos.noarch >> ovirt-host-dependencies-4.2.0-1.el7.centos.x86_64 >> ovirt-host-4.2.0-1.el7.centos.x86_64 >> ovirt-host-deploy-1.7.0-1.el7.centos.noarch >> ovirt-hosted-engine-ha-2.2.2-1.el7.centos.noarch >> ovirt-provider-ovn-driver-1.2.2-1.el7.centos.noarch >> ovirt-engine-appliance-4.2-20171219.1.el7.centos.noarch >> ovirt-vmconsole-host-1.0.4-1.el7.noarch >> cockpit-ovirt-dashboard-0.11.3-0.1.el7.centos.noarch >> ovirt-vmconsole-1.0.4-1.el7.noarch >> >> What I am supposed to do ? I see no newer packages available. >> >> Regards >> >> >> >> Le 07-Feb-2018 13:23:43 +0100, mburman at redhat.com a ?crit: >> >> Hi >> >> This is a bug and it was already fixed in 4.2.1.1-0.1.el7 - >> https://bugzilla.redhat.com/show_bug.cgi?id=1528906 >> >> The no default route bug was fixed in - https://bugzilla.redhat.com/ >> show_bug.cgi?id=1477589 >> >> Thanks, >> >> On Wed, Feb 7, 2018 at 1:15 PM, wrote: >> >>> >>> Hi, >>> I am experiencing a new problem : when I try to modify something in the >>> network setup on the second node (added to the cluster after installing the >>> engine on the other one) using the Engine GUI, I get the following error >>> when validating : >>> >>> must match "^\b((25[0-5]|2[0-4]\d|[01]\d\ >>> d|\d?\d)\_){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" >>> Attribut : ipConfiguration.iPv4Addresses[0].gateway >>> >>> Moreover, on the general status of ther server, I have a "Host has no >>> default route" alert. 
>>> >>> The ovirtmgmt network has a defined gateway of course, and the storage >>> network has none because it is not required. Both server have the same >>> setup, with different addresses of course :-) >>> >>> I have not been able to find anything useful in the logs. >>> >>> Is this a bug or am I doing something wrong ? >>> >>> Regards >>> >>> ------------------------------ >>> FreeMail powered by mail.fr >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> >> Michael Burman >> >> Senior Quality engineer - rhv network - redhat israel >> >> Red Hat >> >> >> >> mburman at redhat.com M: 0545355725 IM: mburman >> >> >> >> ------------------------------ >> FreeMail powered by mail.fr >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Sun Feb 11 13:34:43 2018 From: andreil1 at starlett.lv (Andrei V) Date: Sun, 11 Feb 2018 15:34:43 +0200 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: References: <1518084724645.71365@leedsbeckett.ac.uk> Message-ID: <5ee01ba1-dd19-b4ca-bb47-03254450e1c5@starlett.lv> On 02/10/2018 05:47 PM, Thomas Letherby wrote: > That's exactly what I needed to know, thanks all. > > I'll schedule a script for the nodes to reboot and patch once every week or > two and then I can let it run without me needing to worry about it. 
Is this a shell or a Python script with a connection to the oVirt engine?
My 4.2 node doesn't shut down properly until I shut down all VMs and take
it into maintenance mode manually; the shutdown process freezes at a
certain point.
>
> Thomas
>
> On Fri, Feb 9, 2018, 2:26 AM Martin Sivak wrote:
>
>> Hi,
>>
>> the hosts are almost stateless and we set up most of what is needed
>> during activation. Hosted engine has some configuration stored
>> locally, but that is just the path to the storage domain.
>>
>> I think you should be fine unless you change the network topology
>> significantly. I would also install security updates once in while.
>>
>> We can even shut down the hosts for you when you configure two cluster
>> scheduling properties: EnableAutomaticPM and HostsInReserve.
>> HostsInReserve should be at least 1 though. It behaves like this, as
>> long as the reserve host is empty, we shut down all the other empty
>> hosts. And we boot another host once a VM does not fit on other used
>> hosts and is places on the running reserve host. That would save you
>> the power of just one host, but it would still be highly available (if
>> hosted engine and storage allows that too).
>>
>> Bear in mind that single host cluster is not highly available at all.
>>
>> Best regards
>>
>> Martin Sivak
>>
>> On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi
>> wrote:
>>> On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby
>> wrote:
>>>> Thanks, that answers my follow up question! :)
>>>>
>>>> My concern is that I could have a host off-line for a month say, is that
>>>> going to cause any issues?
>>>>
>>>> Thanks,
>>>>
>>>> Thomas
>>>>
>>> I think that if in the mean time you don't make any configuration changes
>>> and you don't update anything, there is no reason to have problems.
>>> In case of changes done, it could depend on what they are: are you
>> thinking
>>> about any particular scenario?
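To answer the "shell or python" question concretely: the simplest form of the scheduled patch-and-reboot job discussed in this thread is a plain cron entry like the one below. Purely illustrative — it assumes the host has already been drained (no running VMs, host in maintenance), which, as noted in this thread, does not happen automatically on shutdown:

```shell
# /etc/cron.d/host-patch  (illustrative; drain the host first!)
# m h dom mon dow user  command
0 3 * * 0 root yum -y update && shutdown -r now
```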
>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From xrs444 at xrs444.net Sun Feb 11 16:43:02 2018 From: xrs444 at xrs444.net (Thomas Letherby) Date: Sun, 11 Feb 2018 16:43:02 +0000 Subject: [ovirt-users] Maximum time node can be offline. In-Reply-To: References: <1518084724645.71365@leedsbeckett.ac.uk> Message-ID: Thanks Yaniv, that looks like a perfect starting point! Thomas On Sun, Feb 11, 2018, 1:36 AM Yaniv Kaul wrote: > On Sat, Feb 10, 2018 at 5:47 PM, Thomas Letherby > wrote: > >> That's exactly what I needed to know, thanks all. >> >> I'll schedule a script for the nodes to reboot and patch once every week >> or two and then I can let it run without me needing to worry about it. >> > > An example on how to check for updates is available @ > https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upgrade_host.py > Y. > > >> Thomas >> >> On Fri, Feb 9, 2018, 2:26 AM Martin Sivak wrote: >> >>> Hi, >>> >>> the hosts are almost stateless and we set up most of what is needed >>> during activation. Hosted engine has some configuration stored >>> locally, but that is just the path to the storage domain. >>> >>> I think you should be fine unless you change the network topology >>> significantly. I would also install security updates once in while. >>> >>> We can even shut down the hosts for you when you configure two cluster >>> scheduling properties: EnableAutomaticPM and HostsInReserve. >>> HostsInReserve should be at least 1 though. It behaves like this, as >>> long as the reserve host is empty, we shut down all the other empty >>> hosts. 
And we boot another host once a VM does not fit on other used >>> hosts and is places on the running reserve host. That would save you >>> the power of just one host, but it would still be highly available (if >>> hosted engine and storage allows that too). >>> >>> Bear in mind that single host cluster is not highly available at all. >>> >>> Best regards >>> >>> Martin Sivak >>> >>> On Fri, Feb 9, 2018 at 8:25 AM, Gianluca Cecchi >>> wrote: >>> > On Fri, Feb 9, 2018 at 2:30 AM, Thomas Letherby >>> wrote: >>> >> >>> >> Thanks, that answers my follow up question! :) >>> >> >>> >> My concern is that I could have a host off-line for a month say, is >>> that >>> >> going to cause any issues? >>> >> >>> >> Thanks, >>> >> >>> >> Thomas >>> >> >>> > >>> > I think that if in the mean time you don't make any configuration >>> changes >>> > and you don't update anything, there is no reason to have problems. >>> > In case of changes done, it could depend on what they are: are you >>> thinking >>> > about any particular scenario? >>> > >>> > >>> > _______________________________________________ >>> > Users mailing list >>> > Users at ovirt.org >>> > http://lists.ovirt.org/mailman/listinfo/users >>> > >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From niyazielvan at gmail.com Sun Feb 11 19:47:52 2018 From: niyazielvan at gmail.com (Niyazi Elvan) Date: Sun, 11 Feb 2018 22:47:52 +0300 Subject: [ovirt-users] VM backups - Bacchus Message-ID: Dear Friends, It has been a while I could not have time to work on Bacchus. This weekend I created an ansible playbook to replace the installation procedure. You simply download installer.yml and settings.yml files from git repo and run the installer as "ansible-playbook installer.yml" Please check it at https://github.com/openbacchus/bacchus . 
I recommend you to run the installer on a fresh VM, which has no MySQL DB or previous installation. Hope this helps to more people and please let me know about your ideas. ps. Regarding oVirt 4.2, I had a chance to look at it and tried the new domain type "Backup Domain". This is really cool feature and I am planning to implement the support in Bacchus. Hopefully, CBT will show up soon and we will have a better world :) King Regards, -- Niyazi Elvan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Sun Feb 11 20:58:45 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Sun, 11 Feb 2018 21:58:45 +0100 Subject: [ovirt-users] Live migration of VM(0 downtime) while Hypervisor goes down in ovirt In-Reply-To: References: <438319960.1968243.1518204324812.ref@mail.yahoo.com> <438319960.1968243.1518204324812@mail.yahoo.com> <927677519.2261450.1518247846805@mail.yahoo.com> Message-ID: What you're looking at is called fault tolerance in other hypervisors. As far as i know, ovirt doesn't implement such solution. But if your system doesn't support failure recovery done by high availability options, you should take in account to revise your application architecture if you want to keep running on ovirt. Luca Il 10 feb 2018 8:31 AM, "Ranjith P" ha scritto: Hi, >>Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due to overheating or otherwise?) We need a continuous availability of VM's in our production setup. If the hypervisor goes down due to any hardware failure or work load then VM's above hypervisor will reboot and started on available hypervisors. This is normally happening but it disrupting VM's. Can you suggest a solution in this case? Can we achieve this challenge using glusterfs? 
Thanks & Regards Ranjith Sent from Yahoo Mail on Android On Sat, Feb 10, 2018 at 2:07 AM, Yaniv Kaul wrote: On Fri, Feb 9, 2018 at 9:25 PM, ranjithspr13 at yahoo.com < ranjithspr13 at yahoo.com> wrote: Hi, Anyone can suggest how to setup VM Live migration (without restart vm) while Hypervisor goes down in ovirt? I think there are two parts to achieving this: 1. Have a script that migrates VMs off a specific host. This should be easy to write using the Python/Ruby/Java SDK, Ansible or using REST directly. 2. Having this script run as a service when a host shuts down, in the right order - well before libvirt and VDSM shut down, and would be fast enough not to be terminated by systemd. This is a bit more challenging. Who's shutting down the hypervisor? (Or perhaps it is shutdown externally, due to overheating or otherwise?) Y. Using glusterfs is it possible? Then how? Thanks & Regards Ranjith Sent from Yahoo Mail on Android ______________________________ _________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/ mailman/listinfo/users _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From aeR7Re at protonmail.com Sun Feb 11 21:29:41 2018 From: aeR7Re at protonmail.com (aeR7Re) Date: Sun, 11 Feb 2018 16:29:41 -0500 Subject: [ovirt-users] Network Topologies Message-ID: Hello, I'm looking for some advice on or even just some examples of how other oVirt users have configured networking inside their clusters. Currently we're running a cluster with hosts spread across multiple racks in our DC, with layer 2 spanned between them for VM networks. 
While this is functional, it's 100% not ideal as there's multiple single points of failure and at some point someone is going to accidentally loop it :) What we're after is a method of providing a VM network across multiple racks where there are no single points of failure. We've got layer 3 switches in racks capable of running an IGP/EGP. Current ideas: - Run a routing daemon on each VM and have it advertise a /32 to the distribution switch - OVN for layer 2 between hosts + potentially VRRP or similar on the distribution switch So as per my original paragraph, any advice on the most appropriate network topology for an oVirt cluster? or how have you set up your networks? Thank you Sent with [ProtonMail](https://protonmail.com) Secure Email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From i.am.stack at gmail.com Sun Feb 11 21:41:39 2018 From: i.am.stack at gmail.com (~Stack~) Date: Sun, 11 Feb 2018 15:41:39 -0600 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: References: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> Message-ID: <9c8ad0ff-9510-d524-9dc6-310666264876@gmail.com> On 02/11/2018 02:41 AM, Yedidyah Bar David wrote: > On Sun, Feb 11, 2018 at 10:26 AM, Yaniv Kaul wrote: >> >> >> On Sun, Feb 11, 2018 at 2:43 AM, ~Stack~ wrote: [snip] >>> We decided to just start from scratch and my coworker watched and >>> confirmed every step. It works! No problems at all this time. Further >>> evidence that I goofed _something_ up the first time. >> >> >> We should really have an Ansible role that performs the conversion to >> self-signed certificates. >> That would make the conversion easier and safer. > > +1 > > Not sure "self-signed" is the correct term here. Also the internal > engine CA's cert is self-signed. > > I guess you refer to this: > > https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ > > I'd call it "configure-3rd-party-CA" or something like that. 
Greetings, Another +1 from me (obviously! :-). I also agree that we are not doing a self-signed cert, but rather we've purchased a cert from one of the big-name CA vendors that is valid for our domain. "configure-3rd-party-CA" makes more sense to me. Lastly, that is the link that I used for a guide. Thanks! ~Stack~ -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: OpenPGP digital signature URL: From chad at talkjesus.com Mon Feb 12 01:53:50 2018 From: chad at talkjesus.com (Talk Jesus) Date: Sun, 11 Feb 2018 20:53:50 -0500 Subject: [ovirt-users] Few Questions on New Install Message-ID: <00f901d3a3a4$548ff180$fdafd480$@talkjesus.com> Greetings, Just installed oVirt: Software Version: 4.2.0.2-1.el7.centos How do I: - add a subnet of IPv4 addresses to assign to VMs - download (or import) basic Linux templates like CentOS 7 or Ubuntu 16, even if using a minimal ISO - import from SolusVM-based KVM nodes Does oVirt support bulk IPv4 assignment to VMs? If I wish to assign say a full /26 subnet of IPv4 to VM #1, is this a one-click option? Thank you. I read the docs, but everything is a bit confusing for me. From emayoral at arsys.es Mon Feb 12 06:38:20 2018 From: emayoral at arsys.es (Eduardo Mayoral) Date: Mon, 12 Feb 2018 07:38:20 +0100 Subject: [ovirt-users] Few Questions on New Install In-Reply-To: <00f901d3a3a4$548ff180$fdafd480$@talkjesus.com> References: <00f901d3a3a4$548ff180$fdafd480$@talkjesus.com> Message-ID: AFAIK there is no DHCP server integrated with oVirt; you will have to deploy a DHCP server to assign the IP addresses to the VMs (or use static config). About the template imports, an out-of-the-box oVirt installation has a storage domain called "ovirt-image-repository". There you will find many popular templates ready for import. If you do not find what you want there, you can upload the ISO, install one VM, optionally install cloud-init on it, and convert it to a template.
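For the cloud-init step mentioned above, a minimal user-data fragment is usually enough to set up first-boot credentials and packages before sealing the VM as a template. This is only an illustrative sketch; the hostname, user name, and SSH key are placeholders you would replace with your own values:

```yaml
#cloud-config
# Hypothetical first-boot configuration for a VM that will become a template
hostname: template-test
users:
  - name: admin
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAA... admin@example
package_update: true
packages:
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```

In oVirt this kind of cloud-config can be supplied through the VM's "Initial Run" (cloud-init) settings, so each VM created from the template gets it applied on first boot.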
Never used SolusVM. If it uses libvirt you may be able to import the VM using the import VM dialog and choosing "KVM (via libvirt)"; otherwise, you will probably have to copy the VM disks to a storage domain so oVirt can import them, and then recreate a VM using the imported disks. If the source VM is KVM-based, expect no compatibility issues. Eduardo Mayoral Jimeno (emayoral at arsys.es) Administrador de sistemas. Departamento de Plataformas. Arsys internet. +34 941 620 145 ext. 5153 On 12/02/18 02:53, Talk Jesus wrote: > Greetings, > > Just installed Ovirt: > Software Version:4.2.0.2-1.el7.centos > > How Do I: > - add a subnet of IPv4 to assign to VMs > - download (or import) basic Linux templates like Centos 7, Ubuntu 16 even > if using minimal iso > - import from SolusVM based KVM nodes > > Does oVirt support bulk IPv4 assignment to VMs? If I wish to assign say a > full /26 subnet of IPv4 to VM #1, is this a one click option? > > Thank you. I read the docs, but everything is a bit confusing for me. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From jbelka at redhat.com Mon Feb 12 07:06:39 2018 From: jbelka at redhat.com (Jiri Belka) Date: Mon, 12 Feb 2018 02:06:39 -0500 (EST) Subject: [ovirt-users] Network configuration validation error In-Reply-To: References: <20180209112146.2F2D0E12B8@smtp01.mail.de> Message-ID: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> > This option relevant only for the upgrade from 3.6 to 4.0(engine had > different OS major versions), it all other cases the upgrade flow very > similar to upgrade flow of standard engine environment. > > > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via > UI) > 2. Update engine packages(# yum update -y) > 3. Run engine-setup > 4.
Disable GlobalMaintenance > > Could someone explain me at least what "Cluster PROD is at version 4.2 which > is not supported by this upgrade flow. Please fix it before upgrading." > means ? As far as I know 4.2 is the most recent branch available, isn't it ? I have no idea where did you get "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." Please do not cut output and provide exact one. IIUC you should do 'yum update ovirt\*setup\*' and then 'engine-setup' and only after it would finish successfully you would do 'yum -y update'. Maybe that's your problem? Jiri From didi at redhat.com Mon Feb 12 07:09:59 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 12 Feb 2018 09:09:59 +0200 Subject: [ovirt-users] Issue with 4.2.1 RC and SSL In-Reply-To: <9c8ad0ff-9510-d524-9dc6-310666264876@gmail.com> References: <4179b0be-6579-d86e-dc2e-e64c5e3cb57b@gmail.com> <9c8ad0ff-9510-d524-9dc6-310666264876@gmail.com> Message-ID: On Sun, Feb 11, 2018 at 11:41 PM, ~Stack~ wrote: > On 02/11/2018 02:41 AM, Yedidyah Bar David wrote: >> On Sun, Feb 11, 2018 at 10:26 AM, Yaniv Kaul wrote: >>> >>> >>> On Sun, Feb 11, 2018 at 2:43 AM, ~Stack~ wrote: > > [snip] > >>>> We decided to just start from scratch and my coworker watched and >>>> confirmed every step. It works! No problems at all this time. Further >>>> evidence that I goofed _something_ up the first time. >>> >>> >>> We should really have an Ansible role that performs the conversion to >>> self-signed certificates. >>> That would make the conversion easier and safer. >> >> +1 >> >> Not sure "self-signed" is the correct term here. Also the internal >> engine CA's cert is self-signed. >> >> I guess you refer to this: >> >> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_and_SSL/ >> >> I'd call it "configure-3rd-party-CA" or something like that. > > Greetings, > > Another +1 from me (obviously! :-). 
> > I also agree in that we are not doing a self-signed cert, but rather > we've purchased a cert from one of the big-name-CA-vendors that is valid > for our domain. "configure-3rd-party-CA" makes more sense to me. Nit: This big-name-CA-vendors CA's cert is most likely also self-signed, so it's not a mistake to call it "self-signed". The difference between "self-signed by _me_" and "self-signed by big-name" is mainly a matter of trust and business relations (between that big-name and you, big-name and the OS/browser vendors, etc.) and not a technical one. If you loan a friend $100 for a month, the difference between you and a big bank is very similar to that above difference... > > Lastly, that is the link that I used for a guide. > > Thanks! > ~Stack~ > > > -- Didi From russell.wecker at gmail.com Mon Feb 12 07:22:50 2018 From: russell.wecker at gmail.com (Russell Wecker) Date: Mon, 12 Feb 2018 15:22:50 +0800 Subject: [ovirt-users] Hosted-Engine mount .iso file CLI Message-ID: I have a hosted engine setup that when I ran the system updates for it, and it will not boot. I would like to have it boot from a rescue CD image so i can fix it i have copied the /var/run/ovirt-hosted-engine-ha/vm.conf to /root and modified it however i cannot seem to find the exact options to configure the file for .iso my current settings are devices={index:2,iface:ide,shared:false,readonly:true,deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{controller:0,target:0,unit:0,bus:1,type:drive},device:cdrom,path:,type:disk} How do i change it to boot from local .iso. Thanks Any help would be most appreciated. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stirabos at redhat.com Mon Feb 12 08:15:44 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 12 Feb 2018 09:15:44 +0100 Subject: [ovirt-users] Hosted-Engine mount .iso file CLI In-Reply-To: References: Message-ID: On Mon, Feb 12, 2018 at 8:22 AM, Russell Wecker wrote: > I have a hosted engine setup that when I ran the system updates for it, > and it will not boot. > Did you set global maintenance mode at update time? If not, ovirt-ha-agent could try to restart the engine VM for HA reasons in the middle of the update with potentially dangerous results. > I would like to have it boot from a rescue CD image so i can fix it i have > copied the /var/run/ovirt-hosted-engine-ha/vm.conf to /root and modified > it however i cannot seem to find the exact options to configure the file > for .iso my current settings are > > devices={index:2,iface:ide,shared:false,readonly:true, > deviceId:8c3179ac-b322-4f5c-9449-c52e3665e0ae,address:{ > controller:0,target:0,unit:0,bus:1,type:drive},device: > cdrom,path:,type:disk} > How do i change it to boot from local .iso. > > Hi, you can add 'bootOrder:1' and the path to your iso image after 'path:' on your cdrom device line in a local copy of vm.conf. You have also to remove 'bootOrder:1' from the disk device. So you can try starting the VM with hosted-engine --vm-start --vm-conf=/your/custom/vm.conf Good luck > > Thanks > > Any help would be most appreciated. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
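The manual vm.conf edit Simone describes above (add 'bootOrder:1' and an ISO path to the cdrom device, remove 'bootOrder:1' from the disk device) can also be done with a small script. This is only a sketch, assuming the comma-separated key:value device syntax shown earlier in the thread; always work on a copy of vm.conf, never the file under /var/run:

```python
import re

def set_cdrom_boot(vmconf_text, iso_path):
    """Return vm.conf text with the cdrom device set to boot first from
    iso_path, and bootOrder removed from plain disk devices."""
    out = []
    for line in vmconf_text.splitlines():
        if line.startswith("devices=") and "device:cdrom" in line:
            # point the empty path at the rescue ISO and boot from it
            line = line.replace("path:,", "path:%s," % iso_path)
            if "bootOrder:" not in line:
                line = line.replace("devices={", "devices={bootOrder:1,", 1)
        elif line.startswith("devices=") and "type:disk" in line:
            # the disk must no longer claim boot order 1
            line = re.sub(r"bootOrder:1,?", "", line)
        out.append(line)
    return "\n".join(out)

# demo on the cdrom line quoted in the thread (deviceId shortened here)
sample = ("devices={index:2,iface:ide,shared:false,readonly:true,"
          "deviceId:8c31,address:{controller:0,target:0,unit:0,bus:1,"
          "type:drive},device:cdrom,path:,type:disk}")
print(set_cdrom_boot(sample, "/root/rescue.iso"))
```

The resulting file would then be passed to `hosted-engine --vm-start --vm-conf=/your/custom/vm.conf` as described above.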
URL: From spfma.tech at e.mail.fr Mon Feb 12 08:19:09 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 12 Feb 2018 09:19:09 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> References: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> Message-ID: <20180212081909.B5F4BE2266@smtp01.mail.de> Hi, This is the whole message. Here you get the whole output of the process. I ran these commands on the node hosting the engine VM. But if there is a more efficient workflow, I am interested in it! # yum -y update Modules complémentaires chargés : fastestmirror base | 3.6 kB 00:00:00 centos-sclo-rh-release | 2.9 kB 00:00:00 extras | 3.4 kB 00:00:00 ovirt-4.2 | 3.0 kB 00:00:00 ovirt-4.2-centos-gluster312 | 2.9 kB 00:00:00 ovirt-4.2-centos-opstools | 2.9 kB 00:00:00 ovirt-4.2-centos-ovirt42 | 2.9 kB 00:00:00 ovirt-4.2-centos-qemu-ev | 2.9 kB 00:00:00 ovirt-4.2-epel/x86_64/metalink | 26 kB 00:00:00 ovirt-4.2-virtio-win-latest | 3.0 kB 00:00:00 updates | 3.4 kB 00:00:00 Loading mirror speeds from cached hostfile * base: fr.mirror.babylon.network * extras: ftp.ciril.fr * ovirt-4.2: mirror.slu.cz * ovirt-4.2-epel: mirror.uv.es * updates: mirror.plusserver.com No packages marked for update # hosted-engine --upgrade-appliance [ INFO ] Stage: Initializing [ INFO ] Stage: Environment setup During customization use CTRL-D to abort. ================================================================================== Welcome to the oVirt Self Hosted Engine setup/Upgrade tool.
Please refer to the oVirt install guide: https://www.ovirt.org/documentation/how-to/hosted-engine/#fresh-install Please refer to the oVirt upgrade guide: https://www.ovirt.org/documentation/how-to/hosted-engine/#upgrade-hosted-engine ================================================================================== Continuing will upgrade the engine VM running on this hosts deploying and configuring a new appliance. If your engine VM is already based on el7 you can also simply upgrade the engine there. This procedure will create a new disk on the hosted-engine storage domain and it will backup there the content of your current engine VM disk. The new el7 based appliance will be deployed over the existing disk destroying its content; at any time you will be able to rollback using the content of the backup disk. You will be asked to take a backup of the running engine and copy it to this host. The engine backup will be automatically injected and recovered on the new appliance. Are you sure you want to continue? 
(Yes, No)[Yes]: Yes Configuration files: [] Log file: /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180212090846-pse285.log Version: otopi-1.7.5 (otopi-1.7.5-1.el7.centos) [ INFO ] Detecting available oVirt engine appliances [ INFO ] Stage: Environment packages setup [ INFO ] Stage: Programs detection [ INFO ] Stage: Environment setup [ INFO ] Checking maintenance mode [ INFO ] The engine VM is running on this host [ INFO ] Stage: Environment customization --== STORAGE CONFIGURATION ==-- [ INFO ] Answer file successfully loaded [ INFO ] Acquiring internal CA cert from the engine [ INFO ] The following CA certificate is going to be used, please immediately interrupt if not correct: [ INFO ] Issuer: C=US, O=pfm-ad.pfm.loc, CN=pfm-ovirt-engine.pfm-ad.pfm.loc.60769, Subject: C=US, O=pfm-ad.pfm.loc, CN=pfm-ovirt-engine.pfm-ad.pfm.loc.60769, Fingerprint (SHA-1): 2815C0CB4D6E05B7503917173F0D65B452C9D3DC --== HOST NETWORK CONFIGURATION ==-- [ INFO ] Checking SPM status on this host [ INFO ] Connecting to Engine Enter engine admin username [admin at internal]: Enter engine admin password: [ INFO ] Connecting to Engine [ INFO ] This upgrade tool is running on the SPM host [ INFO ] Bridge ovirtmgmt already created [WARNING] Unable to uniquely detect the interface where Bridge ovirtmgmt has been created on, [u'bond0', u'vnet0', u'vnet1', u'vnet2'] appear to be valid alternatives --== VM CONFIGURATION ==-- The following appliance have been found on your system: [1] - The oVirt Engine Appliance image (OVA) - 4.2-20171219.1.el7.centos [2] - Directly select an OVA file Please select an appliance (1, 2) [1]: 1 [ INFO ] Verifying its sha1sum [ INFO ] Checking OVF archive content (could take a few minutes depending on archive size) [ INFO ] Checking OVF XML content (could take a few minutes depending on archive size) Please specify the size of the VM disk in GB: [50]: [ INFO ] Connecting to Engine [ INFO ] The hosted-engine storage domain has enough free space to 
contain a new backup disk. [ INFO ] Checking version requirements [ INFO ] Checking metadata area [ INFO ] Hosted-engine configuration is at a compatible level [ INFO ] Connecting to Engine [ ERROR ] Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading. [ ERROR ] Failed to execute stage 'Environment customization': Unsupported cluster level [ INFO ] Stage: Clean up [ INFO ] Stage: Pre-termination [ INFO ] Stage: Termination [ ERROR ] Hosted Engine upgrade failed Le 12-Feb-2018 08:06:43 +0100, jbelka at redhat.com a crit: > This option relevant only for the upgrade from 3.6 to 4.0(engine had > different OS major versions), it all other cases the upgrade flow very > similar to upgrade flow of standard engine environment. > > > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via > UI) > 2. Update engine packages(# yum update -y) > 3. Run engine-setup > 4. Disable GlobalMaintenance > > Could someone explain me at least what "Cluster PROD is at version 4.2 which > is not supported by this upgrade flow. Please fix it before upgrading." > means ? As far as I know 4.2 is the most recent branch available, isn't it ? I have no idea where did you get "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." Please do not cut output and provide exact one. IIUC you should do 'yum update ovirt\*setup\*' and then 'engine-setup' and only after it would finish successfully you would do 'yum -y update'. Maybe that's your problem? Jiri ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Mon Feb 12 08:43:29 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 12 Feb 2018 09:43:29 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. 
In-Reply-To: References: Message-ID: <20180212084329.E4CE8E2266@smtp01.mail.de> Hi, I have tried this but it didn't solve the problem. I removed the disk and tried to boot with an ISO but no more success. As I need to work on what was installed on this disk, I tried the most violent but efficient solution: destroying the VM and recreating it, keeping its MAC address. On 09-Feb-2018 14:29:41 +0100, gianluca.cecchi at gmail.com wrote: On 09 Feb 2018 13:50, wrote: I have just done it. Is it possible to tweak this XML file (where?) in order to get a working VM? Regards On 09-Feb-2018 12:44:08 +0100, fromani at redhat.com wrote: Hi, could you please file a bug? Please attach the failing XML, you should find it pretty easily in the Vdsm logs. Thanks, On 02/09/2018 12:08 PM, spfma.tech at e.mail.fr wrote: Hi, I just wanted to increase the number of CPUs for a VM and after validating, I got the following error when I try to start it: VM vm-test is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'. I am sure it is a bug, but for now, what can I do in order to remove or edit the conflicting device definitions? I need to be able to start this machine.
4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer) Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Francesco Romani Senior SW Eng., Virtualization R&D Red Hat IRC: fromani github: @fromanirh ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users I seem to remember a similar problem and that deactivating disks of the VM and the then activating them again corrected the problem. Or in case that doesn't work, try to remove disks and Then readd from the floating disk pane... Hih, gianluca ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehaas at redhat.com Mon Feb 12 09:03:20 2018 From: ehaas at redhat.com (Edward Haas) Date: Mon, 12 Feb 2018 11:03:20 +0200 Subject: [ovirt-users] Using network assigned to VM on CentOS host? In-Reply-To: References: Message-ID: I could not understand what you exactly have working now and what you are looking to add. Perhaps share a diagram or try to describe it in more details. Thanks, Edy. On Sun, Feb 11, 2018 at 3:00 AM, Wesley Stewart wrote: > This might be a stupid question. But I am testing out a 10Gb network > directly connected to my Freenas box using a Cat6 crossover cable. > > I setup the connection (on device eno4) and called the network "Crossover" > in oVirt. 
> > I don't have DHCP on this, but I can easily assign VMs a NIC on the > "Crossover" network, assign them an IP address (10.10.10.x) and everything > works fine. But I was curious about doing this for the CentOS host as > well. I want to test out hosting VMs on the NFS share over the 10Gb > network but I wasn't quite sure how to do this without breaking other > connections and I did not want to do anything incorrectly. > > I appreciate your feedback! I apologize if this is a stupid question. > > Running oVirt 4.1.8 on CentOS 7.4 > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Mon Feb 12 09:09:32 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 12 Feb 2018 10:09:32 +0100 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> References: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> Message-ID: <20180212090932.1C55BE2266@smtp01.mail.de> On 12-Feb-2018 08:06:43 +0100, jbelka at redhat.com wrote: > This option relevant only for the upgrade from 3.6 to 4.0(engine had > different OS major versions), it all other cases the upgrade flow very > similar to upgrade flow of standard engine environment. > > > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via > UI) > 2. Update engine packages(# yum update -y) > 3. Run engine-setup > 4. Disable GlobalMaintenance > So I followed these steps connected in the engine VM and didn't get any error message. But the version showed in the GUI is still 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I still have the "no default route" and network validation problems.
Regards > Could someone explain me at least what "Cluster PROD is at version 4.2 which > is not supported by this upgrade flow. Please fix it before upgrading." > means ? As far as I know 4.2 is the most recent branch available, isn't it ? I have no idea where did you get "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." Please do not cut output and provide exact one. IIUC you should do 'yum update ovirt\*setup\*' and then 'engine-setup' and only after it would finish successfully you would do 'yum -y update'. Maybe that's your problem? Jiri ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mburman at redhat.com Mon Feb 12 09:42:26 2018 From: mburman at redhat.com (Michael Burman) Date: Mon, 12 Feb 2018 11:42:26 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180212090932.1C55BE2266@smtp01.mail.de> References: <67544456.214341.1518419199946.JavaMail.zimbra@redhat.com> <20180212090932.1C55BE2266@smtp01.mail.de> Message-ID: "no default route" bug was fixed only on 4.2.1 Your current version doesn't have the fix On Mon, Feb 12, 2018 at 11:09 AM, wrote: > > > > > Le 12-Feb-2018 08:06:43 +0100, jbelka at redhat.com a ?crit: > > > This option relevant only for the upgrade from 3.6 to 4.0(engine had > > different OS major versions), it all other cases the upgrade flow very > > similar to upgrade flow of standard engine environment. > > > > > > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via > > UI) > > 2. Update engine packages(# yum update -y) > > 3. Run engine-setup > > 4. Disable GlobalMaintenance > > > > > So I followed these steps connected in the engine VM and didn't get any > error message. But the version showed in the GUI is > still 4.2.0.2-1.el7.centos. 
Yum had no newer packages to install. And I > still have the "no default route" and network validation problems. > Regards > > > Could someone explain me at least what "Cluster PROD is at version 4.2 > which > > is not supported by this upgrade flow. Please fix it before upgrading." > > means ? As far as I know 4.2 is the most recent branch available, isn't > it ? > > I have no idea where did you get > > "Cluster PROD is at version 4.2 which is not supported by this upgrade > flow. Please fix it before upgrading." > > Please do not cut output and provide exact one. > > IIUC you should do 'yum update ovirt\*setup\*' and then 'engine-setup' > and only after it would finish successfully you would do 'yum -y update'. > Maybe that's your problem? > > Jiri > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Mon Feb 12 10:17:21 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Mon, 12 Feb 2018 11:17:21 +0100 Subject: [ovirt-users] Info about windows guest performance In-Reply-To: References: <0F23B85A-58FB-40E1-B10F-F12E78ACF806@redhat.com> Message-ID: On Sun, Feb 11, 2018 at 9:33 AM, Yaniv Kaul wrote: > > > On Sat, Feb 10, 2018 at 8:41 PM, Gianluca Cecchi < > gianluca.cecchi at gmail.com> wrote: > >> On Fri, Feb 9, 2018 at 4:32 PM, Gianluca Cecchi < >> gianluca.cecchi at gmail.com> wrote: >> >>> >>> >>> If I edit the VM, in general settings I see "Other OS" as operating >>> system. 
>>> In General subtab after selecting the VM in "Virtual Machines" tab I >>> again see "Other OS" in "Operating System" and the field "Origin" filled >>> with the value "VMware" >>> >>> During virt-v2v it seems it was recognized as Windows 2008 though... >>> >>> libguestfs: trace: v2v: hivex_value_utf8 = "Windows Server 2008 R2 >>> Enterprise" >>> libguestfs: trace: v2v: hivex_value_key 11809408 >>> >>> I can send all the log if it can help. >>> Thanks, >>> Gianluca >>> >> >> >> So it seems it has been a problem with the virt-v2v conversion, because if I >> shut down the VM and set it to Windows 2008 R2 x86_64 and optimized for >> server and I run it, I get this flag for the cpu: >> > > A new virt-v2v was just released, worth testing it. It has some nice > features, and perhaps fixes the above too. > For example: > Virt-v2v now installs Windows 10 / Windows Server 2016 virtio block > drivers correctly (Pavel Butsykin, Kun Wei). > > Virt-v2v now installs virtio-rng, balloon and pvpanic drivers, and > correctly sets this in the target hypervisor metadata for > hypervisors > which support that (Tomáš Golembiovský). > > Virt-v2v now installs both legacy and modern virtio keys in the > Windows > registry (Ladi Prosek). > > > Thanks for the info. In the meantime, after installing virtio-win on the proxy host I retried with the same version of virt-v2v provided with 4.1.9: virt-v2v-1.36.3-6.el7_4.3.x86_64, then selected and injected it in the import window, and the VM correctly starts with virtio drivers (version 61.74.104.14100) after a requested reboot (the first start remains at a black window for a couple of minutes and then asks to restart). It also has QXL drivers (6.1.0.1024). But the VM remains as "Other OS" and obviously has no hv_ optimization. I'm going to try the new version as you suggested. It would be nice also to have a virtio-scsi option and not only the virtio option directly in the import function....
BTW: I see that in case a target VM with the same name as the source exists, I'm given an error and I can't change the VM name on the destination.... This seems to me a big limitation, because it forces you to rename the source VM or rename a pre-existing VM at the destination with the same name.... > >> -cpu Westmere,vmx=on,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff >> >> BTW: what are the other flags for: >> >> >> hv_spinlocks=0x1fff >> hv_relaxed >> hv_vapic >> ? >> > > These are the enlightenments that allow Windows guests to run faster (hv = > hyper-v). > See[1] > Y. > > [1] http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html > > Thanks for the Cole link Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From reznikov_aa at soskol.com Mon Feb 12 10:27:12 2018 From: reznikov_aa at soskol.com (Reznikov Alexei) Date: Mon, 12 Feb 2018 13:27:12 +0300 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> Message-ID: <753d9e75-1dfb-0355-d4db-5eec8899db68@soskol.com> 09.02.2018 21:13, Alex K wrote: > Hi, > > did you select "Deploy" when adding the new host? > > See attached. > > Inline image 2 > > Thanx, > Alex > > On Fri, Feb 9, 2018 at 9:53 AM, Reznikov Alexei > > wrote: > > Hi all! > > After upgrading from oVirt 4.0 to 4.1, I have trouble adding the next > HostedEngine host to my cluster via the web UI... the host adds successfully > and comes up, but HE is not active on this host. > > logs from the troubled host > # cat agent.log > > KeyError: 'Configuration value not found: > file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway' > > # cat /etc/ovirt-hosted-engine/hosted-engine.conf > ca_cert=/etc/pki/vdsm/libvirt-spice/ca-cert.pem > host_id=2 > > deploy log from engine attached.
> > trouble host: > ovirt-hosted-engine-setup-2.1.4-1.el7.centos.noarch > ovirt-host-deploy-1.6.7-1.el7.centos.noarch > vdsm-4.19.45-1.el7.centos.x86_64 > CentOS Linux release 7.4.1708 (Core) > > engine host: > ovirt-release41-4.1.9-1.el7.centos.noarch > ovirt-engine-4.1.9.1-1.el7.centos.noarch > CentOS Linux release 7.4.1708 (Core) > > Please help me fix it. > > Thanx, Alex. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > Yes, off course, i did this. Thanx, Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Deploy.png Type: image/png Size: 16466 bytes Desc: not available URL: From matthias.leopold at meduniwien.ac.at Mon Feb 12 11:55:50 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Mon, 12 Feb 2018 12:55:50 +0100 Subject: [ovirt-users] effectiveness of "discard=unmap" In-Reply-To: References: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at> Message-ID: <6b9c63fd-2663-509e-7568-24fd81ed8275@meduniwien.ac.at> Hi Idan, thanks for your answer. But i'm still confused, because i thought that the content of /sys/block/dm-X/queue/discard* in the VM OS should depend on the setting of the "discard=(unmap|ignore)" setting in the qemu-kvm command. Unexpectedly it's the same in both cases (it's >0, saying discard is 'on'). I was then trying to inquire about the TRIM/UNMAP capability of block devices in the VM with "sdparm -p lbp /dev/sdx", but i always get "Logical block provisioning (SBC) mode subpage failed". I know i can (and should) look at the storage array to see if TRIM/UNMAP _actually_ works, but documentation (https://www.kernel.org/doc/Documentation/block/queue-sysfs.txt) says it should be visible beforehand to my understanding. 
Regards Matthias On 2018-02-11 at 08:55, Idan Shaby wrote: > Hi Matthias, > > When the guest executes a discard call of any variation (fstrim, > blkdiscard, etc.), the underlying thinly provisioned LUN is the one that > changes: it returns the unused blocks to the storage array and gets > smaller. > Therefore, no change is visible to the guest OS. > If you want to check what has changed, go to the storage array and check > what's the size of the underlying thinly provisioned LUN before and > after the discard call. > > The answer to your question and some more information can be found in > the feature page [1] (needs a bit of an update, but most of it is still > relevant). > If you have any further questions, please don't hesitate to ask. > > > Regards, > Idan > > [1] Pass discard from guest to underlying storage - > https://www.ovirt.org/develop/release-management/features/storage/pass-discard-from-guest-to-underlying-storage/ > > On Thu, Feb 8, 2018 at 2:08 PM, Matthias Leopold > > wrote: > > Hi, > > i'm sorry to bother you again with my ignorance of the DISCARD > feature for block devices in general. > > after finding several ways to enable "discard=unmap" for oVirt disks > (via standard GUI option for iSCSI disks or via "diskunmap" custom > property for Cinder disks) i wanted to check in the guest for the > effectiveness of this feature. to my surprise i couldn't find a > difference between Linux guests with and without "discard=unmap" > enabled in the VM. "lsblk -D" reports the same in both cases and > also fstrim/blkdiscard commands appear to work with no difference. > Why is this? Do i have to look at the underlying storage to find out > what really happens? Shouldn't this be visible in the guest OS?
> > thx > matthias > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -- Matthias Leopold IT Systems & Communications Medizinische Universität Wien Spitalgasse 23 / BT 88 /Ebene 00 A-1090 Wien Tel: +43 1 40160-21241 Fax: +43 1 40160-921200 From ishaby at redhat.com Mon Feb 12 12:40:54 2018 From: ishaby at redhat.com (Idan Shaby) Date: Mon, 12 Feb 2018 14:40:54 +0200 Subject: [ovirt-users] effectiveness of "discard=unmap" In-Reply-To: <6b9c63fd-2663-509e-7568-24fd81ed8275@meduniwien.ac.at> References: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at> <6b9c63fd-2663-509e-7568-24fd81ed8275@meduniwien.ac.at> Message-ID: On Mon, Feb 12, 2018 at 1:55 PM, Matthias Leopold < matthias.leopold at meduniwien.ac.at> wrote: > Hi Idan, > > thanks for your answer. But i'm still confused, because i thought that the > content of /sys/block/dm-X/queue/discard* in the VM OS should depend on the > setting of the "discard=(unmap|ignore)" setting in the qemu-kvm command. > Unexpectedly it's the same in both cases (it's >0, saying discard is 'on'). > I was then trying to inquire about the TRIM/UNMAP capability of block > devices in the VM with "sdparm -p lbp /dev/sdx", but i always get "Logical > block provisioning (SBC) mode subpage failed". >
> > Regards > Matthias > > Am 2018-02-11 um 08:55 schrieb Idan Shaby: > >> Hi Matthias, >> >> When the guest executes a discard call of any variation (fstrim, >> blkdiscard, etc.), the underlying thinly provisioned LUN is the one that >> changes - it returns the unused blocks to the storage array and gets >> smaller. >> Therefore, no change is visible to the guest OS. >> If you want to check what has changed, go to the storage array and check >> what's the size of the underlying thinly provisioned LUN before and after >> the discard call. >> >> The answer for your question and some more information can be found in >> the feature page [1] (needs a bit of an update, but most of it is still >> relevant). >> If you got any further questions, please don't hesitate to ask. >> >> >> Regards, >> Idan >> >> [1] Pass discard from guest to underlying storage - >> https://www.ovirt.org/develop/release-management/features/st >> orage/pass-discard-from-guest-to-underlying-storage/ >> >> On Thu, Feb 8, 2018 at 2:08 PM, Matthias Leopold < >> matthias.leopold at meduniwien.ac.at > iwien.ac.at>> wrote: >> >> Hi, >> >> i'm sorry to bother you again with my ignorance of the DISCARD >> feature for block devices in general. >> >> after finding several ways to enable "discard=unmap" for oVirt disks >> (via standard GUI option for iSCSI disks or via "diskunmap" custom >> property for Cinder disks) i wanted to check in the guest for the >> effectiveness of this feature. to my surprise i couldn't find a >> difference between Linux guests with and without "discard=unmap" >> enabled in the VM. "lsblk -D" reports the same in both cases and >> also fstrim/blkdiscard commands appear to work with no difference. >> Why is this? Do i have to look at the underlying storage to find out >> what really happens? Shouldn't this be visible in the guest OS? 
>> >> thx >> matthias >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> > -- > Matthias Leopold > IT Systems & Communications > Medizinische Universit?t Wien > Spitalgasse 23 / BT 88 /Ebene 00 > A-1090 Wien > Tel: +43 1 40160-21241 > Fax: +43 1 40160-921200 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.leopold at meduniwien.ac.at Mon Feb 12 13:24:15 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Mon, 12 Feb 2018 14:24:15 +0100 Subject: [ovirt-users] effectiveness of "discard=unmap" In-Reply-To: References: <7daec1b4-dc4a-0527-7c2e-00bf21455ae0@meduniwien.ac.at> <6b9c63fd-2663-509e-7568-24fd81ed8275@meduniwien.ac.at> Message-ID: Am 2018-02-12 um 13:40 schrieb Idan Shaby: > On Mon, Feb 12, 2018 at 1:55 PM, Matthias Leopold > > wrote: > > Hi Idan, > > thanks for your answer. But i'm still confused, because i thought > that the content of /sys/block/dm-X/queue/discard* in the VM OS > should depend on the setting of the "discard=(unmap|ignore)" setting > in the qemu-kvm command. Unexpectedly it's the same in both cases > (it's >0, saying discard is 'on'). I was then trying to inquire > about the TRIM/UNMAP capability of block devices in the VM with > "sdparm -p lbp /dev/sdx", but i always get "Logical block > provisioning (SBC) mode subpage failed". > > The file /sys/block/dm-X/queue/discard_max_bytes in sysfs tells you > whether your underlying storage supports discard. > The flag discard=unmap of the VM in qemu means that qemu will not throw > away the UNMAP commands comming from the guest OS (by default it does > throw them away). > From what I know, the file in sysfs and the VM flag are not related. 
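The in-guest checks discussed above can be summarized in a few commands. This is a sketch, not oVirt documentation: the device name `vda` and the example value are assumptions — read the real value from your own guest's sysfs.

```shell
# Inside a Linux guest, the usual ways to see whether a disk advertises
# discard (the real test of whether UNMAP reaches the array is still on
# the storage side, as Idan explains above):
#
#   lsblk -D                                     # DISC-GRAN / DISC-MAX columns
#   cat /sys/block/vda/queue/discard_max_bytes   # 0 means no discard support
#   fstrim -v /                                  # actually issue the discards
#
# Minimal helper mirroring the sysfs check; feed it the number read from
# discard_max_bytes (the value below is only an illustrative assumption):
discard_enabled() {
    [ "$1" -gt 0 ] && echo "discard supported" || echo "discard not supported"
}
discard_enabled 1048576
```

Note that, per the thread above, these values reflect what the virtual disk advertises to the guest, not whether the thin LUN underneath actually shrinks.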
Thank you, i will now finally accept it ;-) Just for the records: This is where i got my info from: https://chrisirwin.ca/posts/discard-with-kvm/ Matthias From jonbae77 at gmail.com Mon Feb 12 15:14:18 2018 From: jonbae77 at gmail.com (Jon bae) Date: Mon, 12 Feb 2018 16:14:18 +0100 Subject: [ovirt-users] Can't add/remove usb device to VM Message-ID: Hello, I run oVirt 4.2.1. In 4.1 I added a USB device to a VM and now I wanted to move this device to a different VM. But I found out that I'm not able to remove this device, nor can I add any device to any VM. The device list is empty. I found a bug report that describes this error: https://bugzilla.redhat.com/show_bug.cgi?id=1531847 Is there a solution for that? The USB device is a hardware dongle and it is very important for us to change this. Any workaround is welcome! Regards Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: From lveyde at redhat.com Mon Feb 12 15:22:27 2018 From: lveyde at redhat.com (Lev Veyde) Date: Mon, 12 Feb 2018 17:22:27 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.1 Release is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.1 Release, as of February 12th, 2018. This is the first in a series of stabilization updates to the 4.2 series. This release is available now for: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is already available - oVirt Node is already available [2] Additional Resources: * Read more about the oVirt 4.2.1 release highlights: http://www.ovirt.org/release/4.2.1/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.1/ [2] http://resources.ovirt.org/pub/ovirt-4.2/iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From philipp.richter at linforge.com Mon Feb 12 15:25:18 2018 From: philipp.richter at linforge.com (Philipp Richter) Date: Mon, 12 Feb 2018 16:25:18 +0100 (CET) Subject: [ovirt-users] CentOS 7 Hyperconverged oVirt 4.2 with Self-Hosted-Engine with glusterfs with 2 Hypervisors and 1 glusterfs-Arbiter-only Message-ID: <544259669.37244.1518449118807.JavaMail.zimbra@linforge.com> Hi, I'm trying to install oVirt 4.2 as a 2-node hyperconverged system based on glusterfs. A third node should be used as a glusterfs arbiter node and to provide quorum for the cluster. The third node is a small PC Engines APU2 host, so it is not usable as a hypervisor. My question is: Is this kind of setup possible? What is the best way to install a cluster like this one? Thanks, -- : Philipp Richter : LINFORGE | Peace of mind for your IT : : T: +43 1 890 79 99 : E: philipp.richter at linforge.com : https://www.xing.com/profile/Philipp_Richter15 : https://www.linkedin.com/in/philipp-richter : : LINFORGE Technologies GmbH : Brehmstraße 10 : 1110 Wien : Österreich : : Firmenbuchnummer: FN 216034y : USt.- Nummer : ATU53054901 : Gerichtsstand: Wien : : LINFORGE® is a registered trademark of LINFORGE, Austria. 
From msivak at redhat.com Mon Feb 12 15:45:16 2018 From: msivak at redhat.com (Martin Sivak) Date: Mon, 12 Feb 2018 16:45:16 +0100 Subject: [ovirt-users] CentOS 7 Hyperconverged oVirt 4.2 with Self-Hosted-Engine with glusterfs with 2 Hypervisors and 1 glusterfs-Arbiter-only In-Reply-To: <544259669.37244.1518449118807.JavaMail.zimbra@linforge.com> References: <544259669.37244.1518449118807.JavaMail.zimbra@linforge.com> Message-ID: Hi, this should work according to my gluster colleagues. The recommended way to install this would be by using one of the "full" nodes and deploying hosted engine via cockpit there. The gdeploy plugin in cockpit should allow you to configure the arbiter node. The documentation for deploying RHHI (hyper converged RH product) is here: https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html-single/deploying_red_hat_hyperconverged_infrastructure/index#deploy Best regards Martin Sivak On Mon, Feb 12, 2018 at 4:25 PM, Philipp Richter wrote: > Hi, > > I'm trying to install oVirt 4.2 as 2-Node Hyperconverged System based on glusterfs. > A third node should be used as glusterfs arbiter Node and to provide Quorum for the Cluster. > The third node is a small PCEngines APU2 Host, so it is not usable as Hypervisor. > > My question is: Is this kind of setup possible? > What is the best way to install a cluster like this one? > > Thanks, > -- > > : Philipp Richter > : LINFORGE | Peace of mind for your IT > : > : T: +43 1 890 79 99 > : E: philipp.richter at linforge.com > : https://www.xing.com/profile/Philipp_Richter15 > : https://www.linkedin.com/in/philipp-richter > : > : LINFORGE Technologies GmbH > : Brehmstra?e 10 > : 1110 Wien > : ?sterreich > : > : Firmenbuchnummer: FN 216034y > : USt.- Nummer : ATU53054901 > : Gerichtsstand: Wien > : > : LINFORGE? is a registered trademark of LINFORGE, Austria. 
> _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From gianluca.cecchi at gmail.com Mon Feb 12 16:43:21 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Mon, 12 Feb 2018 17:43:21 +0100 Subject: [ovirt-users] [ANN] oVirt 4.2.1 Release is now available In-Reply-To: References: Message-ID: On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde wrote: > The oVirt Project is pleased to announce the availability of the oVirt 4.2 > .1 Release, as of February 12th, 2018 > > This update is a release of the first in a series of stabilization > updates to the 4.2 > series. > > This release is available now for: > * Red Hat Enterprise Linux 7.4 or later > * CentOS Linux (or similar) 7.4 or later > > This release supports Hypervisor Hosts running: > * Red Hat Enterprise Linux 7.4 or later > * CentOS Linux (or similar) 7.4 or later > * oVirt Node 4.2 > > Hello, could you confirm that for plain CentOS 7.4 hosts there was no changes between rc3 and final 4.2.1? I just updated an environment that was in RC3 and while the engine has been updated, the host says no updates: [root at ov42 ~]# yum update Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos Loading mirror speeds from cached hostfile * base: artfiles.org * extras: ba.mirror.garr.it * ovirt-4.2: ftp.nluug.nl * ovirt-4.2-epel: epel.besthosting.ua * updates: ba.mirror.garr.it No packages marked for update [root at ov42 ~]# The mirrors seem the same that the engine has used some minutes before, so they should be ok... 
> > engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch > to ovirt-engine-4.2.1.6-1.el7.centos.noarch > > Base oVirt related packages on host are currently of this type, since > 4.2.1rc3: > > libvirt-daemon-3.2.0-14.el7_4.7.x86_64 > ovirt-host-4.2.1-1.el7.centos.x86_64 > ovirt-vmconsole-1.0.4-1.el7.noarch > qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64 > sanlock-3.5.0-1.el7.x86_64 > vdsm-4.20.17-1.el7.centos.x86_64 > virt-v2v-1.36.3-6.el7_4.3.x86_64 > > Thanks, > Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From jonbae77 at gmail.com Mon Feb 12 16:57:03 2018 From: jonbae77 at gmail.com (Jon bae) Date: Mon, 12 Feb 2018 17:57:03 +0100 Subject: [ovirt-users] Fwd: Can't add/remove usb device to VM In-Reply-To: References: Message-ID: A little update: I found out that I have to activate *Hostdev Passthrough* (is this really necessary for USB devices?). Now I see all the devices and I can also add a USB device. But I'm still not able to disconnect the USB device from the old VM. ---------- Forwarded message ---------- From: Jon bae Date: 2018-02-12 16:14 GMT+01:00 Subject: Can't add/remove usb device to VM To: users Hello, I run oVirt 4.2.1. In 4.1 I added a USB device to a VM and now I wanted to move this device to a different VM. But I found out that I'm not able to remove this device, nor can I add any device to any VM. The device list is empty. I found a bug report that describes this error: https://bugzilla.redhat.com/show_bug.cgi?id=1531847 Is there a solution for that? The USB device is a hardware dongle and it is very important for us to change this. Any workaround is welcome! Regards Jonathan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbonazzo at redhat.com Mon Feb 12 17:14:37 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Mon, 12 Feb 2018 18:14:37 +0100 Subject: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available In-Reply-To: References: Message-ID: 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi : > On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde wrote: > >> The oVirt Project is pleased to announce the availability of the oVirt 4. >> 2.1 Release, as of February 12th, 2018 >> >> This update is a release of the first in a series of stabilization >> updates to the 4.2 >> series. >> >> This release is available now for: >> * Red Hat Enterprise Linux 7.4 or later >> * CentOS Linux (or similar) 7.4 or later >> >> This release supports Hypervisor Hosts running: >> * Red Hat Enterprise Linux 7.4 or later >> * CentOS Linux (or similar) 7.4 or later >> * oVirt Node 4.2 >> >> > Hello, > could you confirm that for plain CentOS 7.4 hosts there was no changes > between rc3 and final 4.2.1? > We had 3 RC after RC3, last one was RC6 https://lists.ovirt.org/pipermail/announce/2018-February/000387.html > I just updated an environment that was in RC3 and while the engine has > been updated, the host says no updates: > > [root at ov42 ~]# yum update > Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos > Loading mirror speeds from cached hostfile > * base: artfiles.org > * extras: ba.mirror.garr.it > * ovirt-4.2: ftp.nluug.nl > * ovirt-4.2-epel: epel.besthosting.ua > * updates: ba.mirror.garr.it > No packages marked for update > I think mirrors are still syncing, but resources.ovirt.org is updated. You can switch from mirrorlist to baseurl in your yum config file if you don't want to wait for the mirror to finish the sync. > [root at ov42 ~]# > > The mirrors seem the same that the engine has used some minutes before, so > they should be ok... 
> > engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch > to ovirt-engine-4.2.1.6-1.el7.centos.noarch > > Base oVirt related packages on host are currently of this type, since > 4.2.1rc3: > > libvirt-daemon-3.2.0-14.el7_4.7.x86_64 > ovirt-host-4.2.1-1.el7.centos.x86_64 > ovirt-vmconsole-1.0.4-1.el7.noarch > qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64 > sanlock-3.5.0-1.el7.x86_64 > vdsm-4.20.17-1.el7.centos.x86_64 > virt-v2v-1.36.3-6.el7_4.3.x86_64 > > yes, most of the changes in the last 3 rcs were related to cockpit-ovirt / ovirt-node / hosted engine $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf ovirt-4.2.1.conf # ovirt-engine-4.2.1.4 http://jenkins.ovirt.org/job/ovirt-engine_master_build-artifacts-el7-x86_64/6483/ # cockpit-ovirt-0.11.7-0.1 http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifacts-el7-x86_64/84/ # ovirt-release42-4.2.1_rc4 http://jenkins.ovirt.org/job/ovirt-release_master_build-artifacts-el7-x86_64/649/ # otopi-1.7.7 http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/ # ovirt-host-deploy-1.7.2 http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-artifacts-el7-x86_64/4/ # ovirt-hosted-engine-setup-2.2.9 http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_build-artifacts-el7-x86_64/5/ # cockpit-ovirt-0.11.11-0.1 http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifacts-el7-x86_64/96/ # ovirt-release42-4.2.1_rc5 http://jenkins.ovirt.org/job/ovirt-release_master_build-artifacts-el7-x86_64/651/ # ovirt-engine-appliance-4.2-20180202.1 http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_build-artifacts-el7-x86_64/62/ # ovirt-node-ng-4.2.0-0.20180205.0 http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_build-artifacts-el7-x86_64/212/ # ovirt-engine-4.2.1.5 http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifacts-el7-x86_64/3/ # ovirt-engine-4.2.1.6 http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifacts-el7-x86_64/12/ # ovirt-release42-4.2.1_rc6 
http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifacts-el7-x86_64/164/ # ovirt-release42-4.2.1 http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifacts-el7-x86_64/173/ > Thanks, > Gianluca > > _______________________________________________ > Announce mailing list > Announce at ovirt.org > http://lists.ovirt.org/mailman/listinfo/announce > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at bootc.boo.tc Mon Feb 12 21:08:13 2018 From: lists at bootc.boo.tc (Chris Boot) Date: Mon, 12 Feb 2018 21:08:13 +0000 Subject: [ovirt-users] VM is down with error: Bad volume specification In-Reply-To: References: <2143974132.381.1516707141162@ovirt.boo.tc> Message-ID: Hi folks, Sorry it's taken me a while to get back to this! I've just updated to 4.2.1 and am still seeing the issue. I've collected the vdsm.log and supervdsm.log from my SPM host after trying to start my broken VM: they are available from: https://www.dropbox.com/sh/z3i3guveutusdv9/AABDN6ubTQN6JhNOrVZhrA1Qa?dl=0 Thanks, Chris On 23/01/18 11:55, Chris Boot wrote: > Hi all, > > I'm running oVirt 4.2.0 and have been using oVirtBackup with it. So far > it has been working fine, until this morning. One of my VMs seems to > have had a snapshot created that I can't delete. > > I noticed when the VM failed to migrate to my other hosts, so I just > shut it down to allow the host to go into maintenance. Now I can't start > the VM with the snapshot nor can I delete the snapshot. > > Please let me know what further information you need to help me diagnose > the issue and recover the VM. > > Best regards, > Chris > > -------- Forwarded Message -------- > Subject: alertMessage (ovirt.boo.tc), [VM morse is down with error. 
Exit > message: Bad volume specification {'address': {'bus': '0', 'controller': > '0', 'type': 'drive', 'target': '0', 'unit': '0'}, 'serial': > 'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi', > 'apparentsize': '12386304', 'cache': 'none', 'imageID': > 'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type': > 'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize': > '0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311', > 'device': 'disk', 'path': > '/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', > 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID': > 'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file', > 'specParams': {}, 'discard': True}.] > Date: Tue, 23 Jan 2018 11:32:21 +0000 (GMT) > From: engine at ovirt.boo.tc > To: bootc at bootc.net > > Time:2018-01-23 11:30:39.677 > Message:VM morse is down with error. Exit message: Bad volume > specification {'address': {'bus': '0', 'controller': '0', 'type': > 'drive', 'target': '0', 'unit': '0'}, 'serial': > 'ec083085-52c1-4da5-88cf-4af02e42a212', 'index': 0, 'iface': 'scsi', > 'apparentsize': '12386304', 'cache': 'none', 'imageID': > 'ec083085-52c1-4da5-88cf-4af02e42a212', 'truesize': '12386304', 'type': > 'file', 'domainID': '23372fb9-51a5-409f-ae21-2521012a83fd', 'reqsize': > '0', 'format': 'cow', 'poolID': '00000001-0001-0001-0001-000000000311', > 'device': 'disk', 'path': > '/rhev/data-center/00000001-0001-0001-0001-000000000311/23372fb9-51a5-409f-ae21-2521012a83fd/images/ec083085-52c1-4da5-88cf-4af02e42a212/aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', > 'propagateErrors': 'off', 'name': 'sda', 'bootOrder': '1', 'volumeID': > 'aa10d05b-f2f0-483e-ab43-7c03a86cd6ab', 'diskType': 'file', > 'specParams': {}, 'discard': True}. 
> Severity:ERROR > VM Name: morse > Host Name: ovirt2.boo.tc > Template Name: Blank > -- Chris Boot bootc at boo.tc From ovirt at a.inyourendo.net Tue Feb 13 00:09:59 2018 From: ovirt at a.inyourendo.net (Tim Thompson) Date: Mon, 12 Feb 2018 16:09:59 -0800 Subject: [ovirt-users] Defining custom network filter or editing existing Message-ID: <1f992c8b-c11e-06a2-07ab-b29361b271c4@a.inyourendo.net> All, I was wondering if someone can point me in the direction of the documentation related to defining custom network filters (nwfilter) in 4.2. I found the docs on assigning a network filter to a vNIC profile, but I cannot find any mention of how you can create your own. Normally you'd use 'virsh nwfilter-define', but that is locked out since vdsm manages everything. I need to expand clean-traffic's scope to include ipv6, since it doesn't handle ipv6 at all by default, it seems. Thanks, -Tim From reznikov_aa at soskol.com Tue Feb 13 06:43:10 2018 From: reznikov_aa at soskol.com (Reznikov Alexei) Date: Tue, 13 Feb 2018 09:43:10 +0300 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: <7004f464280bca707b1b0912dcf07988@soskol.com> References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> <7004f464280bca707b1b0912dcf07988@soskol.com> Message-ID: <78fc3942-87cf-da85-d35d-e20fb958f5ff@soskol.com> 10.02.2018 00:48, reznikov_aa at soskol.com wrote: > Simone Tiraboschi wrote on 2018-02-09 15:17: > >> It shouldn't happen. >> I suspect that something went wrong creating the configuration volume >> on the shared storage at the end of the deployment. >> >> Alexei, can both of you attach your hosted-engine-setup logs? >> Can you please check what happens on >> hosted-engine --get-shared-config gateway >> >> Thanks >> > > Simone, my ovirt cluster was upgraded from 3.4... and my logs are too old. > > I'm confused by the execution of the hosted-engine --get-shared-config > gateway ... 
> I get the output "gateway: 10.245.183.1, type: he_conf", but my > current hosted-engine.conf is overwritten by the other > hosted-engine.conf. > > old file: > > fqdn = eng.lan > vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749 > vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228 > storage = ssd.lan:/ovirt > service_start_time = 0 > host_id = 3 > console = vnc > domainType = nfs3 > sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763 > connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4 > ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem > ca_subject = "C = EN, L = Test, O = Test, CN = Test" > vdsm_use_ssl = true > gateway = 10.245.183.1 > bridge = ovirtmgmt > metadata_volume_UUID = > metadata_image_UUID = > lockspace_volume_UUID = > lockspace_image_UUID = > > The following are used only for iSCSI storage > iqn = > portal = > user = > password = > port = > > conf_volume_UUID = a20d9700-1b9a-41d8-bb4b-f2b7c168104f > conf_image_UUID = b5f353f5-9357-4aad-b1a3-751d411e6278 > conf = /var/run/ovirt-hosted-engine-ha/vm.conf > vm_disk_vol_id = cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b > spUUID = 00000000-0000-0000-0000-000000000000 > > new rewrite file > > fqdn = eng.lan > vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749 > vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228 > storage = ssd.lan:/ovirt > conf = /etc/ovirt-hosted-engine/vm.conf > service_start_time = 0 > host_id = 3 > console = vnc > domainType = nfs3 > spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a > sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763 > connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4 > ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem > ca_subject = "C = EN, L = Test, O = Test, CN = Test" > vdsm_use_ssl = true > gateway = 10.245.183.1 > bridge = ovirtmgmt > metadata_volume_UUID = > metadata_image_UUID = > lockspace_volume_UUID = > lockspace_image_UUID = > > The following are used only for iSCSI storage > iqn = > portal = > user = > password = > port = > > And this in all hosts in cluster! 
> It seems to me that these are some remnants of versions 3.4, 3.5 ... > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users BUMP I resolved the error "KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'". This error was caused by "VDSGenericException: VDSErrorException: received downloaded data size is wrong (requested 20480, received 10240)"; the solution is here: https://access.redhat.com/solutions/3106231 But in my case there is still a problem with the inappropriate parameters in hosted-engine.conf ... I think I should use "hosted-engine --set-shared-config" to change the values on the shared storage. Is this right? Can a guru help solve this? Regards, Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From omachace at redhat.com Tue Feb 13 08:11:19 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 13 Feb 2018 09:11:19 +0100 Subject: [ovirt-users] 4.2 aaa LDAP setup issue In-Reply-To: <776DB316-C6A5-4A64-88CA-88A92AE5F7B7@squaretrade.com> References: <776DB316-C6A5-4A64-88CA-88A92AE5F7B7@squaretrade.com> Message-ID: Hello, On 02/09/2018 08:17 PM, Jamie Lawrence wrote: > Hello, > > I'm bringing up a new 4.2 cluster and would like to use LDAP auth. Our LDAP servers are fine and function normally for a number of other services, but I can't get this working. > > Our LDAP setup requires startTLS and a login. That last bit seems to be where the trouble is. 
After ovirt-engine-extension-aaa-ldap-setup asks for the cert and I pass it the path to the same cert used via nslcd/PAM for logging in to the host, it replies: > > [ INFO ] Connecting to LDAP using 'ldap://x.squaretrade.com:389' > [ INFO ] Executing startTLS > [WARNING] Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} > [ ERROR ] Cannot connect using any of available options > > "Unwilling to perform" makes me think -aaa-ldap-setup is trying something the backend doesn't support, but I'm having trouble guessing what that could be since the tool hasn't gathered sufficient information to connect yet - it asks for a DN/pass later in the script. And the log isn't much more forthcoming. > > I double-checked the cert with openssl; it is a valid, PEM-encoded cert. > > Before I head into the code, has anyone seen this? It looks like you have disallowed anonymous bind on your LDAP server. We try to establish an anonymous bind to test the connection. I would recommend trying a manual configuration; the documentation is here: https://github.com/oVirt/ovirt-engine-extension-aaa-ldap/blob/master/README#L17 Then in your /etc/ovirt-engine/aaa/profile1.properties add the following line: pool.default.auth.type = simple Then test the configuration using ovirt-engine-extensions-tool. If it's OK, just restart ovirt-engine and all should be fine. 
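For reference, a manual profile with a simple (non-anonymous) bind might look roughly like the fragment below. This is a hypothetical sketch, not Ondra's exact recipe: the server name, bind DN, password, truststore path, and the included base profile are all placeholders — check the key names against the aaa-ldap README linked above before using them.

```
# /etc/ovirt-engine/aaa/profile1.properties -- hypothetical sketch
include = <openldap.properties>

vars.server = x.squaretrade.com

pool.default.serverset.single.server = ${global:vars.server}
pool.default.ssl.startTLS = true
pool.default.ssl.truststore.file = /etc/ovirt-engine/aaa/profile1.jks

# Non-anonymous bind, as suggested above:
pool.default.auth.type = simple
pool.default.auth.simple.bindDN = uid=search,ou=People,dc=example,dc=com
pool.default.auth.simple.password = changeme
```

With `auth.type = simple`, the connection test binds with the given DN instead of anonymously, which is what the "Server is unwilling to perform" error above suggests the server requires.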
> > Thanks, > > -j > > - - - - snip - - - - > > Relevant log details: > > 2018-02-08 15:15:08,625-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._getURLs:281 URLs: ['ldap://x.squaretrade.com:389'] > 2018-02-08 15:15:08,626-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:391 Connecting to LDAP using 'ldap://x.squaretrade.com:389' > 2018-02-08 15:15:08,627-0800 INFO otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:442 Executing startTLS > 2018-02-08 15:15:08,640-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:445 Perform search > 2018-02-08 15:15:08,641-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:459 Exception > Traceback (most recent call last): > File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 451, in _connectLDAP > timeout=60, > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 555, in search_st > return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout) > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 546, in search_ext_s > return self.result(msgid,all=1,timeout=timeout)[1] > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 458, in result > resp_type, resp_data, resp_msgid = self.result2(msgid,all,timeout) > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 462, in result2 > resp_type, resp_data, resp_msgid, resp_ctrls = self.result3(msgid,all,timeout) > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 469, in result3 > resp_ctrl_classes=resp_ctrl_classes > File "/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 476, in result4 > ldap_result = self._ldap_call(self._l.result4,msgid,all,timeout,add_ctrls,add_intermediates,add_extop) > File 
"/usr/lib64/python2.7/site-packages/ldap/ldapobject.py", line 99, in _ldap_call > result = func(*args,**kwargs) > UNWILLING_TO_PERFORM: {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} > 2018-02-08 15:15:08,642-0800 WARNING otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._connectLDAP:463 Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} > 2018-02-08 15:15:08,643-0800 ERROR otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:787 Cannot connect using any of available options > 2018-02-08 15:15:08,644-0800 DEBUG otopi.plugins.ovirt_engine_extension_aaa_ldap.ldap.common common._customization_late:788 Exception > Traceback (most recent call last): > File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 782, in _customization_late > insecure=insecure, > File "/usr/share/ovirt-engine-extension-aaa-ldap/setup/bin/../plugins/ovirt-engine-extension-aaa-ldap/ldap/common.py", line 468, in _connectLDAP > _('Cannot connect using any of available options') > SoftRuntimeError: Cannot connect using any of available options > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From dholler at redhat.com Tue Feb 13 08:24:02 2018 From: dholler at redhat.com (Dominik Holler) Date: Tue, 13 Feb 2018 09:24:02 +0100 Subject: [ovirt-users] Defining custom network filter or editing existing In-Reply-To: <1f992c8b-c11e-06a2-07ab-b29361b271c4@a.inyourendo.net> References: <1f992c8b-c11e-06a2-07ab-b29361b271c4@a.inyourendo.net> Message-ID: <20180213092402.1c19db0a@t460p> On Mon, 12 Feb 2018 16:09:59 -0800 Tim Thompson wrote: > All, > > I was wondering if someone can point me in the direction of the > documentation related to defining custom network filters (nwfilter) > in 4.2. 
I found the docs on assigning a network filter to a vNIC > profile, but I cannot find any mention of how you can create your > own. Normally you'd use 'virst nwfilter-define', but that is locked > out since vdsm manages everything. I need to expand clean-traffic's > scope to include ipv6, since it doesn't handle ipv6 at all by > default, it seems. > Custom network filters are not supported. If you still want to use custom network filters, you would have to: - add custom network properties on oVirt-engine level, - add a hook like vdsm_hooks/noipspoof/noipspoof.py which modifies libvirt's domain XML to activate the custom network filter and - be yourself responsible to deploy the custom network filter definition to all nodes From mburman at redhat.com Tue Feb 13 08:49:46 2018 From: mburman at redhat.com (Michael Burman) Date: Tue, 13 Feb 2018 10:49:46 +0200 Subject: [ovirt-users] Defining custom network filter or editing existing In-Reply-To: <20180213092402.1c19db0a@t460p> References: <1f992c8b-c11e-06a2-07ab-b29361b271c4@a.inyourendo.net> <20180213092402.1c19db0a@t460p> Message-ID: Thanks Dominik, Just spoke with Dan and we decided to open a RFE to add an option to set a custom nwfilter in the engine UI See - https://bugzilla.redhat.com/show_bug.cgi?id=1544666 Cheers) On Tue, Feb 13, 2018 at 10:24 AM, Dominik Holler wrote: > On Mon, 12 Feb 2018 16:09:59 -0800 > Tim Thompson wrote: > > > All, > > > > I was wondering if someone can point me in the direction of the > > documentation related to defining custom network filters (nwfilter) > > in 4.2. I found the docs on assigning a network filter to a vNIC > > profile, but I cannot find any mention of how you can create your > > own. Normally you'd use 'virst nwfilter-define', but that is locked > > out since vdsm manages everything. I need to expand clean-traffic's > > scope to include ipv6, since it doesn't handle ipv6 at all by > > default, it seems. > > > > Custom network filters are not supported. 
> If you still want to use custom network filters, you would have to: > - add custom network properties on oVirt-engine level, > - add a hook like vdsm_hooks/noipspoof/noipspoof.py which modifies > libvirt's domain XML to activate the custom network filter and > - take responsibility yourself for deploying the custom network filter > definition to all nodes > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From kwolf at redhat.com Tue Feb 13 09:41:12 2018 From: kwolf at redhat.com (Kevin Wolf) Date: Tue, 13 Feb 2018 10:41:12 +0100 Subject: [ovirt-users] [Qemu-block] qcow2 images corruption In-Reply-To: References: Message-ID: <20180213094111.GB5083@localhost.localdomain> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote: > TL; DR : qcow2 images keep getting corrupted. Any workaround? Not without knowing the cause. The first thing to make sure is that the image isn't touched by a second process while QEMU is running a VM. The classic one is using 'qemu-img snapshot' on the image of a running VM, which is instant corruption (and newer QEMU versions have locking in place to prevent this), but we have seen more absurd cases of things outside QEMU tampering with the image when we were investigating previous corruption reports. This covers the majority of all reports; we haven't had a real corruption caused by a QEMU bug in ages. > After having found (https://access.redhat.com/solutions/1173623) the right > logical volume hosting the qcow2 image, I can run qemu-img check on it. > - On 80% of my VMs, I find no errors.
> - On 15% of them, I find Leaked cluster errors that I can correct using > "qemu-img check -r all" > - On 5% of them, I find Leaked cluster errors and further fatal errors, > which can not be corrected with qemu-img. > In rare cases, qemu-img can correct them, but destroys large parts of the > image (it becomes unusable), and in other cases it can not correct them at all. It would be good if you could make the 'qemu-img check' output available somewhere. It would be even better if we could have a look at the respective image. I seem to remember that John (CCed) had a few scripts to analyse corrupted qcow2 images, maybe we would be able to see something there. > What I have read about cases similar to mine involves: > - usage of qcow2 > - heavy disk I/O > - using the virtio-blk driver > > In the Proxmox thread, they tend to say that using virtio-scsi is the > solution. I asked this question to the oVirt experts > (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's > not clear the driver is to blame. This seems very unlikely. The corruption you're seeing is in the qcow2 metadata, not only in the guest data. If anything, virtio-scsi exercises more qcow2 code paths than virtio-blk, so any potential bug that affects virtio-blk should also affect virtio-scsi, but not the other way around. > I agree with the answer Yaniv Kaul gave to me, saying I have to properly > report the issue, so I'm keen to know which particular information I can > give you now. To be honest, debugging corruption after the fact is pretty hard. We'd need the 'qemu-img check' output and ideally the image to do anything, but I can't promise that anything would come out of this. Best would be a reproducer, or at least some operation that you can link to the appearance of the corruption. Then we could take a more targeted look at the respective code. > As you can imagine, all this setup is in production, and for most of the > VMs, I can not "play" with them.
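[Editor's note] A read-only sweep of the kind discussed here can be wrapped in a short script. This is only a sketch under assumptions: images are plain `*.qcow2` files under a directory (on block storage you would iterate over the activated LVs instead), the path is illustrative, and the checker command is injectable so the loop can be exercised without qemu-img installed. It deliberately never passes `-r`, so corrupted images stay intact for debugging.

```python
#!/usr/bin/env python
# Read-only sweep over qcow2 images (sketch; never repairs anything).
import os
import subprocess


def check_images(root, qemu_img='qemu-img'):
    """Run 'qemu-img check' (read-only, no -r) on every qcow2 under root."""
    results = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith('.qcow2'):
                continue
            path = os.path.join(dirpath, name)
            # non-zero exit codes flag problems; see qemu-img(1) for details
            results[path] = subprocess.call([qemu_img, 'check', path])
    return results


if __name__ == '__main__':
    # path illustrative; keep any image with a non-zero result for debugging
    for path, rc in check_images('/rhev/data-center').items():
        print(path, rc)
```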
Moreover, we launched a campaign of nightly > stopping every VM, qemu-img check them one by one, then boot. > So it might take some time before I find another corrupted image. > (which I'll preciously store for debug) > > Other information: We very rarely do snapshots, but I'm inclined to think > that automated migrations of VMs could trigger similar behaviors on qcow2 > images. To my knowledge, oVirt only uses external snapshots and creates them with QMP. This should be perfectly safe because from the perspective of the qcow2 image being snapshotted, it just means that it gets no new write requests. Migration is something more involved, and if you could relate the problem to migration, that would certainly be something to look into. In that case, it would be important to know more about the setup, e.g. is it migration with shared or non-shared storage? > Last point about the versions we use: yes that's old, yes we're planning to > upgrade, but we don't know when. That would be helpful, too. Nothing is more frustrating than debugging a bug in an old version only to find that it's already fixed in the current version (well, except maybe debugging and finding nothing). Kevin From s.danzi at hawai.it Tue Feb 13 09:51:57 2018 From: s.danzi at hawai.it (Stefano Danzi) Date: Tue, 13 Feb 2018 10:51:57 +0100 Subject: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1 Message-ID: Hello! In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM. Hosted engine starts regularly. I have a single host with Hosted Engine. Host cpu is an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz When I start any VM I get this error: "The CPU type of the cluster is unknown. It's possible to change the cluster cpu or set a different one per VM." All VMs have "Guest CPU Type: N/D" Cluster now has CPU Type "Intel Conroe Family" (I don't remember the cpu type before the upgrade), my CPU should be Ivy Bridge but it isn't in the dropdown list.
If I try to select a similar cpu (SandyBridge IBRS) I get an error. I can't change the cluster cpu type when I have running hosts with a lower CPU type. I can't put the host in maintenance because the hosted engine is running on it. How can I solve this? From stirabos at redhat.com Tue Feb 13 10:28:39 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Tue, 13 Feb 2018 11:28:39 +0100 Subject: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1 In-Reply-To: References: Message-ID: Ciao Stefano, we have to properly investigate this: thanks for the report. Can you please attach from your host the output of - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf - vdsm-client Host getCapabilities Can you please also attach the engine-setup logs from your 4.2.0 to 4.2.1 upgrade? On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi wrote: > Hello! > > In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM. > Hosted engine starts regularly. > > I have a single host with Hosted Engine. > > Host cpu is an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz > > When I start any VM I get this error: "The CPU type of the cluster is > unknown. It's possible to change the cluster cpu or set a different one per > VM." > > All VMs have "Guest CPU Type: N/D" > > Cluster now has CPU Type "Intel Conroe Family" (I don't remember the cpu type > before the upgrade), my CPU should be Ivy Bridge but it isn't in the > dropdown list. > > If I try to select a similar cpu (SandyBridge IBRS) I get an error. I > can't change the cluster cpu type when I have running hosts with a lower CPU > type. > I can't put the host in maintenance because the hosted engine is running on it. > > How can I solve this? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed...
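[Editor's note] For anyone hitting the same symptom: the `vdsm-client Host getCapabilities` output Simone asks for is JSON, and vdsm reports the CPU models the host can run as `model_<Name>` entries inside the comma-separated `cpuFlags` string. A small parser makes it easy to see which cluster CPU types the host should match; the sample data below is abbreviated and purely illustrative.

```python
# Sketch: extract the supported cluster CPU models from the JSON returned
# by `vdsm-client Host getCapabilities`. Sample data is illustrative.
import json


def supported_models(caps):
    """Return CPU model names encoded as model_* entries in cpuFlags."""
    flags = caps.get('cpuFlags', '').split(',')
    return sorted(f[len('model_'):] for f in flags if f.startswith('model_'))


if __name__ == '__main__':
    sample = json.loads("""
    {"cpuModel": "Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz",
     "cpuFlags": "fpu,vme,sse4_2,model_Conroe,model_Penryn,model_Nehalem,model_Westmere,model_SandyBridge"}
    """)
    print(supported_models(sample))
```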
URL: From gianluca.cecchi at gmail.com Tue Feb 13 10:40:53 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Tue, 13 Feb 2018 11:40:53 +0100 Subject: [ovirt-users] leftover of disk moving operation In-Reply-To: References: Message-ID: On Wed, Jan 31, 2018 at 5:01 PM, Elad Ben Aharon wrote: > Just delete the image directory (remove_me_8eb435f3-e8c1-4042-8180-e9f342b2e449) > located under /rhev/data-center/%spuuid%/%sduuid%/images/ > > As for the LV, please try the following: > > dmsetup remove /dev/mapper/%device_name% --> device name could be fetched > by 'dmsetup table' > Hello, for that oVirt environment I finished moving the disks from source to target, so I could power off all the test infra, and at node reboot I didn't have the problem again (also because I force-removed the source storage domain), so I could not investigate further. But I have "sort of" reproduced the problem inside another FC SAN storage based environment. The problem happened with a VM having 4 disks: one boot disk of 50Gb and 3 other disks of 100Gb, 200Gb, 200Gb. The VM was powered off, and deleting each of the 3 "big" disks (tried both with and without deactivating the disk before removal) produced the same error as in my oVirt environment above during the move: command HSMGetAllTasksStatusesVDS failed: Cannot remove Logical Volume: ([' Cannot remove Logical Volume: So I think the problem is related to the SAN itself, perhaps when you work with relatively "big" disks. Another suspect is a problem with hypervisor LVM filtering, because all 3 disks had a PV/VG/LV structure inside, created on the whole virtual disk at VM level. As this new environment is RHEV with RHV-H hosts (layer rhvh-4.1-0.20171002.0+1) I opened the case #02034032 if interested. The big problem is that the disk has been removed on the VM side, but on the storage domain side the space has not been released, so if you have to create other "big" disks, you could run out of space because of this.
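[Editor's note] Elad's dmsetup cleanup can be made a little safer by generating the commands instead of running them directly. The sketch below reads the text of `dmsetup table` and prints the matching `dmsetup remove` commands; the UUID and device names are placeholders, and the printed commands should always be reviewed before being executed on a host.

```python
# Sketch: generate (don't run) the dmsetup cleanup commands for stale
# device-mapper entries belonging to a removed image. Names are placeholders.
def stale_dm_commands(dmsetup_table_output, image_uuid):
    """Return 'dmsetup remove' commands for dm entries mentioning image_uuid."""
    commands = []
    for line in dmsetup_table_output.splitlines():
        if image_uuid not in line:
            continue
        name = line.split(':', 1)[0]  # `dmsetup table` lines start with "<name>:"
        commands.append('dmsetup remove /dev/mapper/%s' % name)
    return commands


if __name__ == '__main__':
    sample = ('vg-8eb435f3--e8c1: 0 1024 linear 8:16 0\n'
              'vg-other: 0 8 zero\n')
    for cmd in stale_dm_commands(sample, '8eb435f3'):
        print(cmd)
```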
Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 13 10:42:18 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 13 Feb 2018 12:42:18 +0200 Subject: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available In-Reply-To: References: Message-ID: Hi all, Is this version considered production ready? Thanx, Alex On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola wrote: > > > 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi : > >> On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde wrote: >> >>> The oVirt Project is pleased to announce the availability of the oVirt 4. >>> 2.1 Release, as of February 12th, 2018 >>> >>> This update is a release of the first in a series of stabilization >>> updates to the 4.2 >>> series. >>> >>> This release is available now for: >>> * Red Hat Enterprise Linux 7.4 or later >>> * CentOS Linux (or similar) 7.4 or later >>> >>> This release supports Hypervisor Hosts running: >>> * Red Hat Enterprise Linux 7.4 or later >>> * CentOS Linux (or similar) 7.4 or later >>> * oVirt Node 4.2 >>> >>> >> Hello, >> could you confirm that for plain CentOS 7.4 hosts there was no changes >> between rc3 and final 4.2.1? >> > > We had 3 RC after RC3, last one was RC6 https://lists.ovirt.org/ > pipermail/announce/2018-February/000387.html > > > >> I just updated an environment that was in RC3 and while the engine has >> been updated, the host says no updates: >> >> [root at ov42 ~]# yum update >> Loaded plugins: fastestmirror, langpacks, product-id, >> search-disabled-repos >> Loading mirror speeds from cached hostfile >> * base: artfiles.org >> * extras: ba.mirror.garr.it >> * ovirt-4.2: ftp.nluug.nl >> * ovirt-4.2-epel: epel.besthosting.ua >> * updates: ba.mirror.garr.it >> No packages marked for update >> > > I think mirrors are still syncing, but resources.ovirt.org is updated. 
> You can switch from mirrorlist to baseurl in your yum config file if you > don't want to wait for the mirror to finish the sync. > > > >> [root at ov42 ~]# >> >> The mirrors seem the same that the engine has used some minutes before, >> so they should be ok... >> >> engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch >> to ovirt-engine-4.2.1.6-1.el7.centos.noarch >> >> Base oVirt related packages on host are currently of this type, since >> 4.2.1rc3: >> >> libvirt-daemon-3.2.0-14.el7_4.7.x86_64 >> ovirt-host-4.2.1-1.el7.centos.x86_64 >> ovirt-vmconsole-1.0.4-1.el7.noarch >> qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64 >> sanlock-3.5.0-1.el7.x86_64 >> vdsm-4.20.17-1.el7.centos.x86_64 >> virt-v2v-1.36.3-6.el7_4.3.x86_64 >> >> > yes, most of the changes in the last 3 rcs were related to cockpit-ovirt / > ovirt-node / hosted engine > > $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf > ovirt-4.2.1.conf > > # ovirt-engine-4.2.1.4 > http://jenkins.ovirt.org/job/ovirt-engine_master_build- > artifacts-el7-x86_64/6483/ > > # cockpit-ovirt-0.11.7-0.1 > http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build- > artifacts-el7-x86_64/84/ > > # ovirt-release42-4.2.1_rc4 > http://jenkins.ovirt.org/job/ovirt-release_master_build- > artifacts-el7-x86_64/649/ > # otopi-1.7.7 > http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/ > > # ovirt-host-deploy-1.7.2 > http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build- > artifacts-el7-x86_64/4/ > > # ovirt-hosted-engine-setup-2.2.9 > http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_ > build-artifacts-el7-x86_64/5/ > > # cockpit-ovirt-0.11.11-0.1 > http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build- > artifacts-el7-x86_64/96/ > > # ovirt-release42-4.2.1_rc5 > http://jenkins.ovirt.org/job/ovirt-release_master_build- > artifacts-el7-x86_64/651/ > > # ovirt-engine-appliance-4.2-20180202.1 > http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_ > build-artifacts-el7-x86_64/62/ 
> > # ovirt-node-ng-4.2.0-0.20180205.0 > http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_ > build-artifacts-el7-x86_64/212/ > > # ovirt-engine-4.2.1.5 > http://jenkins.ovirt.org/job/ovirt-engine_4.2_build- > artifacts-el7-x86_64/3/ > > # ovirt-engine-4.2.1.6 > http://jenkins.ovirt.org/job/ovirt-engine_4.2_build- > artifacts-el7-x86_64/12/ > > # ovirt-release42-4.2.1_rc6 > http://jenkins.ovirt.org/job/ovirt-release_4.2_build- > artifacts-el7-x86_64/164/ > > # ovirt-release42-4.2.1 > http://jenkins.ovirt.org/job/ovirt-release_4.2_build- > artifacts-el7-x86_64/173/ > > > >> Thanks, >> Gianluca >> >> _______________________________________________ >> Announce mailing list >> Announce at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/announce >> >> > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA > > TRIED. TESTED. TRUSTED. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Tue Feb 13 10:42:54 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Tue, 13 Feb 2018 11:42:54 +0100 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: <78fc3942-87cf-da85-d35d-e20fb958f5ff@soskol.com> References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> <7004f464280bca707b1b0912dcf07988@soskol.com> <78fc3942-87cf-da85-d35d-e20fb958f5ff@soskol.com> Message-ID: On Tue, Feb 13, 2018 at 7:43 AM, Reznikov Alexei wrote: > 10.02.2018 00:48, reznikov_aa at soskol.com wrote: > > Simone Tiraboschi wrote on 2018-02-09 15:17: > > It shouldn't happen. > I suspect that something went wrong creating the configuration volume > on the shared storage at the end of the deployment.
> > Alexei, can both of you attach your hosted-engine-setup logs? > Can you please check what happens on > hosted-engine --get-shared-config gateway > > Thanks > > Simone, my ovirt cluster was upgraded from 3.4... and my logs are too old. > > I'm confused by the execution of the hosted-engine --get-shared-config gateway ... > I get the output "gateway: 10.245.183.1, type: he_conf", but my current > hosted-engine.conf is overwritten by the other hosted-engine.conf. > > old file: > > fqdn = eng.lan > vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749 > vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228 > storage = ssd.lan:/ovirt > service_start_time = 0 > host_id = 3 > console = vnc > domainType = nfs3 > sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763 > connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4 > ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem > ca_subject = "C = EN, L = Test, O = Test, CN = Test" > vdsm_use_ssl = true > gateway = 10.245.183.1 > bridge = ovirtmgmt > metadata_volume_UUID = > metadata_image_UUID = > lockspace_volume_UUID = > lockspace_image_UUID = > > The following are used only for iSCSI storage > iqn = > portal = > user = > password = > port = > > conf_volume_UUID = a20d9700-1b9a-41d8-bb4b-f2b7c168104f > conf_image_UUID = b5f353f5-9357-4aad-b1a3-751d411e6278 > conf = /var/run/ovirt-hosted-engine-ha/vm.conf > vm_disk_vol_id = cd12a59e-7d84-4b4e-98c7-4c68e83ecd7b > spUUID = 00000000-0000-0000-0000-000000000000 > > new (rewritten) file > > fqdn = eng.lan > vm_disk_id = e9d7a377-e109-4b28-9a43-7a8c8b603749 > vmid = ccdd675a-a58b-495a-9502-3e6a4b7e5228 > storage = ssd.lan:/ovirt > conf = /etc/ovirt-hosted-engine/vm.conf > service_start_time = 0 > host_id = 3 > console = vnc > domainType = nfs3 > spUUID = 036f83d7-39f7-48fd-a73a-3c9ffb3dbe6a > sdUUID = 8905c9ac-d892-478d-8346-63b8fa1c5763 > connectionUUID = ce84071b-86a2-4e82-b4d9-06abf23dfbc4 > ca_cert =/etc/pki/vdsm/libvirt-spice/ca-cert.pem > ca_subject = "C = EN, L = Test, O = Test, CN = Test" >
vdsm_use_ssl = true > gateway = 10.245.183.1 > bridge = ovirtmgmt > metadata_volume_UUID = > metadata_image_UUID = > lockspace_volume_UUID = > lockspace_image_UUID = > > The following are used only for iSCSI storage > iqn = > portal = > user = > password = > port = > > And this is on all hosts in the cluster! > It seems to me that these are some remnants of versions 3.4, 3.5 ... > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > BUMP > > I resolved the error "KeyError: 'Configuration value not found: > file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=gateway'". > > This error was caused... "*VDSGenericException: VDSErrorException: > received downloaded data size is wrong (requested 20480, received 10240)*", > the solution is here https://access.redhat.com/solutions/3106231 > > But in my case there is still a problem with the inappropriate parameters > in hosted-engine.conf ... I think I should use "hosted-engine > --set-shared-config" to change the values on the shared storage. Is this > right? > Yes, unfortunately you are absolutely right on that: there is a bug there. As a side effect, hosted-engine --set-shared-config and hosted-engine --get-shared-config always refresh the local copy of the hosted-engine configuration files with the copy on the shared storage, and so you will always end up with host_id=1 in /etc/ovirt-hosted-engine/hosted-engine.conf, which can lead to SPM conflicts. I'd suggest manually fixing the host_id parameter in /etc/ovirt-hosted-engine/hosted-engine.conf to its original value (double-check against the engine DB with 'sudo -u postgres psql engine -c "SELECT vds_spm_id, vds.vds_name FROM vds"' on the engine VM) to avoid that. https://bugzilla.redhat.com/1543988
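[Editor's note] The consistency check Simone recommends can be sketched in a few lines: parse host_id from the local hosted-engine.conf and compare it with the vds_spm_id the engine DB reports for that host. The conf text and the expected id below are illustrative stand-ins for the real file and the real query result.

```python
# Sketch: verify that host_id in hosted-engine.conf matches the engine DB.
# Sample conf text and expected id are illustrative.
def parse_host_id(conf_text):
    """Return host_id from hosted-engine.conf-style text, or None."""
    for line in conf_text.splitlines():
        key, _, value = line.partition('=')
        if key.strip() == 'host_id':
            return int(value.strip())
    return None


if __name__ == '__main__':
    conf = "fqdn=eng.lan\nhost_id=3\nconsole=vnc\n"
    spm_id_from_db = 3  # from: SELECT vds_spm_id, vds_name FROM vds
    if parse_host_id(conf) != spm_id_from_db:
        print("host_id mismatch -> risk of SPM conflicts")
    else:
        print("host_id consistent with engine DB")
```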
> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbonazzo at redhat.com Tue Feb 13 10:54:20 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 13 Feb 2018 11:54:20 +0100 Subject: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available In-Reply-To: References: Message-ID: 2018-02-13 11:42 GMT+01:00 Alex K : > Hi all, > > Is this version considered production ready? > Yes, 4.2.1 is considered production ready > > Thanx, > Alex > > > On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola > wrote: > >> >> >> 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi : >> >>> On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde wrote: >>> >>>> The oVirt Project is pleased to announce the availability of the oVirt >>>> 4.2.1 Release, as of February 12th, 2018 >>>> >>>> This update is a release of the first in a series of stabilization >>>> updates to the 4.2 >>>> series. >>>> >>>> This release is available now for: >>>> * Red Hat Enterprise Linux 7.4 or later >>>> * CentOS Linux (or similar) 7.4 or later >>>> >>>> This release supports Hypervisor Hosts running: >>>> * Red Hat Enterprise Linux 7.4 or later >>>> * CentOS Linux (or similar) 7.4 or later >>>> * oVirt Node 4.2 >>>> >>>> >>> Hello, >>> could you confirm that for plain CentOS 7.4 hosts there was no changes >>> between rc3 and final 4.2.1? 
>>> >> >> We had 3 RC after RC3, last one was RC6 https://lists.ovirt.org/pi >> permail/announce/2018-February/000387.html >> >> >> >>> I just updated an environment that was in RC3 and while the engine has >>> been updated, the host says no updates: >>> >>> [root at ov42 ~]# yum update >>> Loaded plugins: fastestmirror, langpacks, product-id, >>> search-disabled-repos >>> Loading mirror speeds from cached hostfile >>> * base: artfiles.org >>> * extras: ba.mirror.garr.it >>> * ovirt-4.2: ftp.nluug.nl >>> * ovirt-4.2-epel: epel.besthosting.ua >>> * updates: ba.mirror.garr.it >>> No packages marked for update >>> >> >> I think mirrors are still syncing, but resources.ovirt.org is updated. >> You can switch from mirrorlist to baseurl in your yum config file if you >> don't want to wait for the mirror to finish the sync. >> >> >> >>> [root at ov42 ~]# >>> >>> The mirrors seem the same that the engine has used some minutes before, >>> so they should be ok... >>> >>> engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch >>> to ovirt-engine-4.2.1.6-1.el7.centos.noarch >>> >>> Base oVirt related packages on host are currently of this type, since >>> 4.2.1rc3: >>> >>> libvirt-daemon-3.2.0-14.el7_4.7.x86_64 >>> ovirt-host-4.2.1-1.el7.centos.x86_64 >>> ovirt-vmconsole-1.0.4-1.el7.noarch >>> qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64 >>> sanlock-3.5.0-1.el7.x86_64 >>> vdsm-4.20.17-1.el7.centos.x86_64 >>> virt-v2v-1.36.3-6.el7_4.3.x86_64 >>> >>> >> yes, most of the changes in the last 3 rcs were related to cockpit-ovirt >> / ovirt-node / hosted engine >> >> $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf >> ovirt-4.2.1.conf >> >> # ovirt-engine-4.2.1.4 >> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artif >> acts-el7-x86_64/6483/ >> >> # cockpit-ovirt-0.11.7-0.1 >> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac >> ts-el7-x86_64/84/ >> >> # ovirt-release42-4.2.1_rc4 >> 
http://jenkins.ovirt.org/job/ovirt-release_master_build-arti >> facts-el7-x86_64/649/ >> # otopi-1.7.7 >> http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/ >> >> # ovirt-host-deploy-1.7.2 >> http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-art >> ifacts-el7-x86_64/4/ >> >> # ovirt-hosted-engine-setup-2.2.9 >> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_b >> uild-artifacts-el7-x86_64/5/ >> >> # cockpit-ovirt-0.11.11-0.1 >> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac >> ts-el7-x86_64/96/ >> >> # ovirt-release42-4.2.1_rc5 >> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti >> facts-el7-x86_64/651/ >> >> # ovirt-engine-appliance-4.2-20180202.1 >> http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_b >> uild-artifacts-el7-x86_64/62/ >> >> # ovirt-node-ng-4.2.0-0.20180205.0 >> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_bui >> ld-artifacts-el7-x86_64/212/ >> >> # ovirt-engine-4.2.1.5 >> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact >> s-el7-x86_64/3/ >> >> # ovirt-engine-4.2.1.6 >> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact >> s-el7-x86_64/12/ >> >> # ovirt-release42-4.2.1_rc6 >> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac >> ts-el7-x86_64/164/ >> >> # ovirt-release42-4.2.1 >> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac >> ts-el7-x86_64/173/ >> >> >> >>> Thanks, >>> Gianluca >>> >>> _______________________________________________ >>> Announce mailing list >>> Announce at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/announce >>> >>> >> >> >> -- >> >> SANDRO BONAZZOLA >> >> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D >> >> Red Hat EMEA >> >> TRIED. TESTED. TRUSTED. 
>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 13 11:03:28 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 13 Feb 2018 13:03:28 +0200 Subject: [ovirt-users] [ovirt-announce] [ANN] oVirt 4.2.1 Release is now available In-Reply-To: References: Message-ID: Thanx! On Tue, Feb 13, 2018 at 12:54 PM, Sandro Bonazzola wrote: > > > 2018-02-13 11:42 GMT+01:00 Alex K : > >> Hi all, >> >> Is this version considered production ready? >> > > Yes, 4.2.1 is considered production ready > > > > >> >> Thanx, >> Alex >> >> >> On Mon, Feb 12, 2018 at 7:14 PM, Sandro Bonazzola >> wrote: >> >>> >>> >>> 2018-02-12 17:43 GMT+01:00 Gianluca Cecchi : >>> >>>> On Mon, Feb 12, 2018 at 4:22 PM, Lev Veyde wrote: >>>> >>>>> The oVirt Project is pleased to announce the availability of the oVirt >>>>> 4.2.1 Release, as of February 12th, 2018 >>>>> >>>>> This update is a release of the first in a series of stabilization >>>>> updates to the 4.2 >>>>> series. >>>>> >>>>> This release is available now for: >>>>> * Red Hat Enterprise Linux 7.4 or later >>>>> * CentOS Linux (or similar) 7.4 or later >>>>> >>>>> This release supports Hypervisor Hosts running: >>>>> * Red Hat Enterprise Linux 7.4 or later >>>>> * CentOS Linux (or similar) 7.4 or later >>>>> * oVirt Node 4.2 >>>>> >>>>> >>>> Hello, >>>> could you confirm that for plain CentOS 7.4 hosts there was no changes >>>> between rc3 and final 4.2.1? 
>>>> >>> >>> We had 3 RC after RC3, last one was RC6 https://lists.ovirt.org/pi >>> permail/announce/2018-February/000387.html >>> >>> >>> >>>> I just updated an environment that was in RC3 and while the engine has >>>> been updated, the host says no updates: >>>> >>>> [root at ov42 ~]# yum update >>>> Loaded plugins: fastestmirror, langpacks, product-id, >>>> search-disabled-repos >>>> Loading mirror speeds from cached hostfile >>>> * base: artfiles.org >>>> * extras: ba.mirror.garr.it >>>> * ovirt-4.2: ftp.nluug.nl >>>> * ovirt-4.2-epel: epel.besthosting.ua >>>> * updates: ba.mirror.garr.it >>>> No packages marked for update >>>> >>> >>> I think mirrors are still syncing, but resources.ovirt.org is updated. >>> You can switch from mirrorlist to baseurl in your yum config file if you >>> don't want to wait for the mirror to finish the sync. >>> >>> >>> >>>> [root at ov42 ~]# >>>> >>>> The mirrors seem the same that the engine has used some minutes before, >>>> so they should be ok... >>>> >>>> engine packages passed from ovirt-engine-4.2.1.4-1.el7.centos.noarch >>>> to ovirt-engine-4.2.1.6-1.el7.centos.noarch >>>> >>>> Base oVirt related packages on host are currently of this type, since >>>> 4.2.1rc3: >>>> >>>> libvirt-daemon-3.2.0-14.el7_4.7.x86_64 >>>> ovirt-host-4.2.1-1.el7.centos.x86_64 >>>> ovirt-vmconsole-1.0.4-1.el7.noarch >>>> qemu-kvm-ev-2.9.0-16.el7_4.13.1.x86_64 >>>> sanlock-3.5.0-1.el7.x86_64 >>>> vdsm-4.20.17-1.el7.centos.x86_64 >>>> virt-v2v-1.36.3-6.el7_4.3.x86_64 >>>> >>>> >>> yes, most of the changes in the last 3 rcs were related to cockpit-ovirt >>> / ovirt-node / hosted engine >>> >>> $ cat ovirt-4.2.1_rc4.conf ovirt-4.2.1_rc5.conf ovirt-4.2.1_rc6.conf >>> ovirt-4.2.1.conf >>> >>> # ovirt-engine-4.2.1.4 >>> http://jenkins.ovirt.org/job/ovirt-engine_master_build-artif >>> acts-el7-x86_64/6483/ >>> >>> # cockpit-ovirt-0.11.7-0.1 >>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac >>> ts-el7-x86_64/84/ >>> >>> # 
ovirt-release42-4.2.1_rc4 >>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti >>> facts-el7-x86_64/649/ >>> # otopi-1.7.7 >>> http://jenkins.ovirt.org/job/otopi_4.2_build-artifacts-el7-x86_64/2/ >>> >>> # ovirt-host-deploy-1.7.2 >>> http://jenkins.ovirt.org/job/ovirt-host-deploy_4.2_build-art >>> ifacts-el7-x86_64/4/ >>> >>> # ovirt-hosted-engine-setup-2.2.9 >>> http://jenkins.ovirt.org/job/ovirt-hosted-engine-setup_4.2_b >>> uild-artifacts-el7-x86_64/5/ >>> >>> # cockpit-ovirt-0.11.11-0.1 >>> http://jenkins.ovirt.org/job/cockpit-ovirt_4.2_build-artifac >>> ts-el7-x86_64/96/ >>> >>> # ovirt-release42-4.2.1_rc5 >>> http://jenkins.ovirt.org/job/ovirt-release_master_build-arti >>> facts-el7-x86_64/651/ >>> >>> # ovirt-engine-appliance-4.2-20180202.1 >>> http://jenkins.ovirt.org/job/ovirt-appliance_ovirt-4.2-pre_b >>> uild-artifacts-el7-x86_64/62/ >>> >>> # ovirt-node-ng-4.2.0-0.20180205.0 >>> http://jenkins.ovirt.org/job/ovirt-node-ng_ovirt-4.2-pre_bui >>> ld-artifacts-el7-x86_64/212/ >>> >>> # ovirt-engine-4.2.1.5 >>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact >>> s-el7-x86_64/3/ >>> >>> # ovirt-engine-4.2.1.6 >>> http://jenkins.ovirt.org/job/ovirt-engine_4.2_build-artifact >>> s-el7-x86_64/12/ >>> >>> # ovirt-release42-4.2.1_rc6 >>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac >>> ts-el7-x86_64/164/ >>> >>> # ovirt-release42-4.2.1 >>> http://jenkins.ovirt.org/job/ovirt-release_4.2_build-artifac >>> ts-el7-x86_64/173/ >>> >>> >>> >>>> Thanks, >>>> Gianluca >>>> >>>> _______________________________________________ >>>> Announce mailing list >>>> Announce at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/announce >>>> >>>> >>> >>> >>> -- >>> >>> SANDRO BONAZZOLA >>> >>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D >>> >>> Red Hat EMEA >>> >>> TRIED. TESTED. TRUSTED. 
>>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > SANDRO BONAZZOLA > > ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D > > Red Hat EMEA > > TRIED. TESTED. TRUSTED. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From s.danzi at hawai.it Tue Feb 13 11:26:09 2018 From: s.danzi at hawai.it (Stefano Danzi) Date: Tue, 13 Feb 2018 12:26:09 +0100 Subject: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1 In-Reply-To: References: Message-ID: <875b57f7-85a8-6666-934e-af2daa80fc80@hawai.it> Strange thing..... after the "vdsm-client Host getCapabilities" command, the cluster cpu type became "Intel Sandybridge Family". Same thing for all VMs. Now I can run VMs. On 13/02/2018 11:28, Simone Tiraboschi wrote: > Ciao Stefano, > we have to properly investigate this: thanks for the report. > > Can you please attach from your host the output of > - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf > - vdsm-client Host getCapabilities > > Can you please also attach the engine-setup logs from your 4.2.0 to 4.2.1 > upgrade? > > > > On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi > wrote: > > Hello! > > In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start > any VM. > Hosted engine starts regularly. > > I have a single host with Hosted Engine. > > Host cpu is an Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz > > When I start any VM I get this error: "The CPU type of the cluster > is unknown. It's possible to change the cluster cpu or set a > different one per VM."
> > All VMs have "Guest CPU Type: N/D" > > Cluster now has CPU Type "Intel Conroe Family" (I don't remember > the cpu type before the upgrade); my CPU should be Ivy Bridge but it > isn't in the dropdown list. > > If I try to select a similar cpu (SandyBridge IBRS) I get an > error. I can't change the cluster cpu type when I have running hosts > with a lower CPU type. > I can't put the host in maintenance because the hosted engine is running > on it. > > How can I solve this? > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -- Stefano Danzi Responsabile ICT HAWAI ITALIA S.r.l. Via Forte Garofolo, 16 37057 S. Giovanni Lupatoto Verona Italia P. IVA 01680700232 tel. +39/045/8266400 fax +39/045/8266401 Web www.hawai.it -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Tue Feb 13 11:28:45 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Tue, 13 Feb 2018 12:28:45 +0100 Subject: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1 In-Reply-To: <875b57f7-85a8-6666-934e-af2daa80fc80@hawai.it> References: <875b57f7-85a8-6666-934e-af2daa80fc80@hawai.it> Message-ID: On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi wrote: > Strange thing..... > > after the "vdsm-client Host getCapabilities" command, the cluster cpu type became > "Intel Sandybridge Family". Same thing for all VMs. > Can you please share engine.log ? > Now I can run VMs. > > Il 13/02/2018 11:28, Simone Tiraboschi ha scritto: > > Ciao Stefano, > we have to properly investigate this: thanks for the report. > > Can you please attach from your host the output of > - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf > - vdsm-client Host getCapabilities > > Can you please also attach the engine-setup logs from your 4.2.0 to 4.2.1 > upgrade? > > > > On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi wrote: > >> Hello!
>> >> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any VM. >> Hosted engine starts regularly. >> >> I have a sigle host with Hosted Engine. >> >> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz >> >> When I start any VM I get this error: "The CPU type of the cluster is >> unknown. Its possible to change the cluster cpu or set a different one per >> VM." >> >> All VMs have " Guest CPU Type: N/D" >> >> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu type >> before the upgrade), my CPU should be Ivy Bridge but it isn't in the >> dropdown list. >> >> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I >> can't chage cluster cpu type when I have running hosts with a lower CPU >> type. >> I can't put host in maintenance because hosted engine is running on it. >> >> How I can solve? >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > > > -- > > Stefano Danzi > Responsabile ICT > > HAWAI ITALIA S.r.l. > Via Forte Garofolo, 16 > 37057 S. Giovanni Lupatoto Verona Italia > > P. IVA 01680700232 > > tel. +39/045/8266400 <+39%20045%20826%206400> > fax +39/045/8266401 <+39%20045%20826%206401> > Web www.hawai.it > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From enrico.becchetti at pg.infn.it Tue Feb 13 11:48:30 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Tue, 13 Feb 2018 12:48:30 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! Message-ID: ?Dear All, I have been using ovirt for a long time with three hypervisors and an external engine running in a centos vm . This three hypervisors have HBAs and access to fiber channel storage. Until recently I used version 3.5, then I reinstalled everything from scratch and now I have 4.2. 
Before formatting everything, I detached the storage data domain (FC) with the virtual machines and reimported it into the new 4.2, and all went well. In this domain there were virtual machines with and without snapshots. Now I have two problems. The first is that if I try to delete a snapshot, the process does not end successfully and remains hanging, and the second problem is that in one case I lost the virtual machine !!! So I need your help to kill the three running zombie tasks, because with taskcleaner.sh I can't do anything, and then I need to know how I can delete the old snapshots made with the 3.5 without losing other data and without creating new processes that never terminate correctly. If you want some log files please let me know. Thank you so much. Best Regards Enrico -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: S/MIME cryptographic signature URL: From spfma.tech at e.mail.fr Tue Feb 13 13:09:03 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Tue, 13 Feb 2018 14:09:03 +0100 Subject: [ovirt-users] Network configuration validation error Message-ID: <20180213130903.D177DE2269@smtp01.mail.de> I did not see I had to enable another repo to get this update, so I was sure I had the latest version available ! After adding it, things went a lot better and I was able to update the engine and all the nodes flawlessly to version 4.2.1.6-1.el7.centos. Thanks a lot for your help ! The "no default route" error has disappeared indeed. But I still couldn't validate network setup modifications on one node, as I still had the following error in the GUI : * must match "^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)" * Attribute: ipConfiguration.iPv4Addresses[0].gateway So I tried a dummy thing : I put a value in the gateway field for the NIC which doesn't need one (NFS), and was able to validate.
Then I edited it again, removed the value and was able to validate again ! Regards Le 12-Feb-2018 10:42:30 +0100, mburman at redhat.com a crit: "no default route" bug was fixed only on 4.2.1 Your current version doesn't have the fix On Mon, Feb 12, 2018 at 11:09 AM, wrote: Le 12-Feb-2018 08:06:43 +0100, jbelka at redhat.com a crit: > This option relevant only for the upgrade from 3.6 to 4.0(engine had > different OS major versions), it all other cases the upgrade flow very > similar to upgrade flow of standard engine environment. > > > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via > UI) > 2. Update engine packages(# yum update -y) > 3. Run engine-setup > 4. Disable GlobalMaintenance > So I followed these steps connected in the engine VM and didn't get any error message. But the version showed in the GUI is still 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I still have the "no default route" and network validation problems. Regards > Could someone explain me at least what "Cluster PROD is at version 4.2 which > is not supported by this upgrade flow. Please fix it before upgrading." > means ? As far as I know 4.2 is the most recent branch available, isn't it ? I have no idea where did you get "Cluster PROD is at version 4.2 which is not supported by this upgrade flow. Please fix it before upgrading." Please do not cut output and provide exact one. IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup' and only after it would finish successfully you would do 'yum -y update'. Maybe that's your problem? 
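Incidentally, the gateway pattern in that validation error is just a dotted-quad IPv4 check whose backslashes seem to have been eaten along the way. A minimal Python sketch of what the validator appears to test (the regex below is a reconstruction and an assumption, not the literal pattern shipped in ovirt-engine):

```python
import re

# Reconstructed from the archived error text: "d" is assumed to be "\d"
# and "_" an escaped dot. Best guess, not the exact ovirt-engine regex.
IPV4_GATEWAY = re.compile(
    r"^\b((25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)\.){3}"
    r"(25[0-5]|2[0-4]\d|[01]\d\d|\d?\d)$"
)

def valid_gateway(address: str) -> bool:
    """True when the string looks like a dotted-quad IPv4 gateway."""
    return IPV4_GATEWAY.match(address) is not None
```

An empty gateway field fails such a pattern outright, which would explain why a NIC that needs no gateway could not be validated until a dummy value was entered and then removed.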
Jiri ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Tue Feb 13 13:09:47 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Tue, 13 Feb 2018 15:09:47 +0200 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti < enrico.becchetti at pg.infn.it> wrote: > Dear All, > I have been using ovirt for a long time with three hypervisors and an > external engine running in a centos vm. > > These three hypervisors have HBAs and access to fiber channel storage. > Until recently I used version 3.5, then I reinstalled everything from > scratch and now I have 4.2. > > Before formatting everything, I detached the storage data domain (FC) with > the virtual machines and reimported it into the new 4.2, and all went well. In > this domain there were virtual machines with and without snapshots. > > Now I have two problems. The first is that if I try to delete a snapshot, > the process does not end successfully and remains hanging, and the second > problem is that > in one case I lost the virtual machine !!! > Not sure that I fully understand the scenario. How did the virtual machine get lost if you only tried to delete a snapshot?
> > So I need your help to kill the three running zombie tasks because with > taskcleaner.sh I can't do anything and then I need to know how I can delete > the old snapshots > made with the 3.5 without losing other data or without having new > processes that terminate correctly. > > If you want some log files please let me know. > Hi Enrico, Can you please attach the engine and VDSM logs > > Thank you so much. > Best Regards > Enrico > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Tue Feb 13 13:17:39 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Tue, 13 Feb 2018 14:17:39 +0100 Subject: [ovirt-users] Unable to start VM after oVirt Upgrade from 4.2.0 to 4.2.1 In-Reply-To: References: <875b57f7-85a8-6666-934e-af2daa80fc80@hawai.it> Message-ID: On Tue, Feb 13, 2018 at 12:28 PM, Simone Tiraboschi wrote: > > > On Tue, Feb 13, 2018 at 12:26 PM, Stefano Danzi wrote: > >> Strange thing..... >> >> after "vdsm-client Host getCapabilities" command, cluster cpu type become >> "Intel Sandybridge Family". Same thing for all VMs. >> > > Can you please share engine.log ? > OK, I found a specific patch for that issue: https://gerrit.ovirt.org/#/c/86913/ but the patch didn't landed in ovirt-engine-dbscripts-4.2.1.6-1.el7.centos.noarch so every 4.2.0 -> 4.2.1 upgrade will result in that issue if the cluster CPU family is not in Intel Nehalem Family-IBRS Intel Nehalem-IBRS Family Intel Westmere-IBRS Family Intel SandyBridge-IBRS Family Intel Haswell-noTSX-IBRS Family Intel Haswell-IBRS Family Intel Broadwell-noTSX-IBRS Family Intel Broadwell-IBRS Family Intel Skylake Family Intel Skylake-IBRS Family as in your case. Let's see if we can have a quick respin. > > >> Now I can run VMs. 
>> >> Il 13/02/2018 11:28, Simone Tiraboschi ha scritto: >> >> Ciao Stefano, >> we have to properly indagate this: thanks for the report. >> >> Can you please attach from your host the output of >> - grep cpuType /var/run/ovirt-hosted-engine-ha/vm.conf >> - vdsm-client Host getCapabilities >> >> Can you please attach also engine-setup logs from your 4.2.0 to 4.2.1 >> upgrade? >> >> >> >> On Tue, Feb 13, 2018 at 10:51 AM, Stefano Danzi wrote: >> >>> Hello! >>> >>> In my test system I upgraded from 4.2.0 to 4.2.1 and I can't start any >>> VM. >>> Hosted engine starts regularly. >>> >>> I have a sigle host with Hosted Engine. >>> >>> Host cpu is a Intel(R) Xeon(R) CPU E3-1220 V2 @ 3.10GHz >>> >>> When I start any VM I get this error: "The CPU type of the cluster is >>> unknown. Its possible to change the cluster cpu or set a different one per >>> VM." >>> >>> All VMs have " Guest CPU Type: N/D" >>> >>> Cluster now has CPU Type "Intel Conroe Family" (I don't remember cpu >>> type before the upgrade), my CPU should be Ivy Bridge but it isn't in the >>> dropdown list. >>> >>> If I try to select a similar cpu (SandyBridge IBRS) I get an error. I >>> can't chage cluster cpu type when I have running hosts with a lower CPU >>> type. >>> I can't put host in maintenance because hosted engine is running on it. >>> >>> How I can solve? >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> >> >> -- >> >> Stefano Danzi >> Responsabile ICT >> >> HAWAI ITALIA S.r.l. >> Via Forte Garofolo, 16 >> 37057 S. Giovanni Lupatoto Verona Italia >> >> P. IVA 01680700232 >> >> tel. +39/045/8266400 <+39%20045%20826%206400> >> fax +39/045/8266401 <+39%20045%20826%206401> >> Web www.hawai.it >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From enrico.becchetti at pg.infn.it Tue Feb 13 13:42:13 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Tue, 13 Feb 2018 14:42:13 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: References: Message-ID: see the attach files please ... thanks for your attention !!! Best Regards Enrico Il 13/02/2018 14:09, Maor Lipchuk ha scritto: > > > On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti > > wrote: > > ?Dear All, > I have been using ovirt for a long time with three hypervisors and > an external engine running in a centos vm . > > This three hypervisors have HBAs and access to fiber channel > storage. Until recently I used version 3.5, then I reinstalled > everything from scratch and now I have 4.2. > > Before formatting everything, I detach the storage data domani > (FC) with the virtual machines and reimported it to the new 4.2 > and all went well. In > this domain there were virtual machines with and without snapshots. > > Now I have two problems. The first is that if I try to delete a > snapshot the process is not end successful and remains hanging and > the second problem is that > in one case I lost the virtual machine !!! > > > > Not sure that I fully understand the scneario.' > How was the virtual machine got lost if you only tried to delete a > snapshot? > > > So I need your help to kill the three running zombie tasks because > with taskcleaner.sh I can't do anything and then I need to know > how I can delete the old snapshots > made with the 3.5 without losing other data or without having new > processes that terminate correctly. > > If you want some log files please let me know. > > > > Hi Enrico, > > Can you please attach the engine and VDSM logs > > > Thank you so much. 
> Best Regards > Enrico > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: engine.log.tmp.gz Type: application/gzip Size: 192144 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: scenario.pdf Type: application/pdf Size: 1228080 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: Firma crittografica S/MIME URL: From mlipchuk at redhat.com Tue Feb 13 13:51:56 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Tue, 13 Feb 2018 15:51:56 +0200 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti < enrico.becchetti at pg.infn.it> wrote: > see the attach files please ... thanks for your attention !!! > Seems like the engine logs does not contain the entire process, can you please share older logs since the import operation? > Best Regards > Enrico > > > Il 13/02/2018 14:09, Maor Lipchuk ha scritto: > > > > On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti < > enrico.becchetti at pg.infn.it> wrote: > >> Dear All, >> I have been using ovirt for a long time with three hypervisors and an >> external engine running in a centos vm . 
>> >> This three hypervisors have HBAs and access to fiber channel storage. >> Until recently I used version 3.5, then I reinstalled everything from >> scratch and now I have 4.2. >> >> Before formatting everything, I detach the storage data domani (FC) with >> the virtual machines and reimported it to the new 4.2 and all went well. In >> this domain there were virtual machines with and without snapshots. >> >> Now I have two problems. The first is that if I try to delete a snapshot >> the process is not end successful and remains hanging and the second >> problem is that >> in one case I lost the virtual machine !!! >> > > > Not sure that I fully understand the scneario.' > How was the virtual machine got lost if you only tried to delete a > snapshot? > > >> >> So I need your help to kill the three running zombie tasks because with >> taskcleaner.sh I can't do anything and then I need to know how I can delete >> the old snapshots >> made with the 3.5 without losing other data or without having new >> processes that terminate correctly. >> >> If you want some log files please let me know. >> > > > Hi Enrico, > > Can you please attach the engine and VDSM logs > > >> >> Thank you so much. >> Best Regards >> Enrico >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mlipchuk at redhat.com Tue Feb 13 13:52:23 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Tue, 13 Feb 2018 15:52:23 +0200 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: References: Message-ID: On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk wrote: > > On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti < > enrico.becchetti at pg.infn.it> wrote: > >> see the attach files please ... thanks for your attention !!! >> > > > Seems like the engine logs does not contain the entire process, can you > please share older logs since the import operation? > And VDSM logs as well from your host > > >> Best Regards >> Enrico >> >> >> Il 13/02/2018 14:09, Maor Lipchuk ha scritto: >> >> >> >> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti < >> enrico.becchetti at pg.infn.it> wrote: >> >>> Dear All, >>> I have been using ovirt for a long time with three hypervisors and an >>> external engine running in a centos vm . >>> >>> This three hypervisors have HBAs and access to fiber channel storage. >>> Until recently I used version 3.5, then I reinstalled everything from >>> scratch and now I have 4.2. >>> >>> Before formatting everything, I detach the storage data domani (FC) with >>> the virtual machines and reimported it to the new 4.2 and all went well. In >>> this domain there were virtual machines with and without snapshots. >>> >>> Now I have two problems. The first is that if I try to delete a snapshot >>> the process is not end successful and remains hanging and the second >>> problem is that >>> in one case I lost the virtual machine !!! >>> >> >> >> Not sure that I fully understand the scneario.' >> How was the virtual machine got lost if you only tried to delete a >> snapshot? 
>> >> >>> >>> So I need your help to kill the three running zombie tasks because with >>> taskcleaner.sh I can't do anything and then I need to know how I can delete >>> the old snapshots >>> made with the 3.5 without losing other data or without having new >>> processes that terminate correctly. >>> >>> If you want some log files please let me know. >>> >> >> >> Hi Enrico, >> >> Can you please attach the engine and VDSM logs >> >> >>> >>> Thank you so much. >>> Best Regards >>> Enrico >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> -- >> _______________________________________________________________________ >> >> Enrico Becchetti Servizio di Calcolo e Reti >> >> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >> Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From philipp.richter at linforge.com Tue Feb 13 14:57:40 2018 From: philipp.richter at linforge.com (Philipp Richter) Date: Tue, 13 Feb 2018 15:57:40 +0100 (CET) Subject: [ovirt-users] CentOS 7 Hyperconverged oVirt 4.2 with Self-Hosted-Engine with glusterfs with 2 Hypervisors and 1 glusterfs-Arbiter-only In-Reply-To: References: <544259669.37244.1518449118807.JavaMail.zimbra@linforge.com> Message-ID: <739330687.45130.1518533860492.JavaMail.zimbra@linforge.com> Hi, > The recommended way to install this would be by using one of the > "full" nodes and deploying hosted engine via cockpit there. The > gdeploy plugin in cockpit should allow you to configure the arbiter > node. 
> > The documentation for deploying RHHI (hyper converged RH product) is > here: > https://access.redhat.com/documentation/en-us/red_hat_hyperconverged_infrastructure/1.1/html-single/deploying_red_hat_hyperconverged_infrastructure/index#deploy Thanks for the documentation pointer about RHHI. I was able to successfully setup all three Nodes. I had to edit the final gdeploy File, as the Installer reserves 20GB per arbiter volume and I don't have that much space available for this POC. The problem now is that I don't see the third node i.e. in the Storage / Volumes / Bricks view, and I get warning messages every few seconds into the /var/log/ovirt-engine/engine.log like: 2018-02-13 15:40:26,188+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [5a8c68e2] Could not add brick 'ovirtpoc03-storage:/gluster_bricks/engine/engine' to volume '2e7a0ac3-3a74-40ba-81ff-d45b2b35aace' - server uuid '0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster 'cab4ba5c-10ba-11e8-aed5-00163e6a7af9' 2018-02-13 15:40:26,193+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] (DefaultQuartzScheduler3) [5a8c68e2] Could not add brick 'ovirtpoc03-storage:/gluster_bricks/vmstore/vmstore' to volume '5a356223-8774-4944-9a95-3962a3c657e4' - server uuid '0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster 'cab4ba5c-10ba-11e8-aed5-00163e6a7af9' Of course I cannot add the third node as normal oVirt Host as it is slow, has only minimal amount of RAM and the CPU (AMD) is different than that one of the two "real" Hypervisors (Intel). Is there a way to add the third Node only for gluster management, not as Hypervisor? Or is there any other method to at least quieten the log? 
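While waiting for a proper answer, the unmatched server UUIDs can at least be summarised out of engine.log with a short script. A sketch based on the two WARN lines quoted above (the message format is an assumption drawn from that snippet and may need adjusting for other engine versions):

```python
import re

# Matches the tail of the GlusterVolumesListReturn warnings shown above.
WARN_RE = re.compile(
    r"server uuid '(?P<server>[0-9a-f-]+)' "
    r"not found in cluster '(?P<cluster>[0-9a-f-]+)'"
)

def unmatched_servers(log_lines):
    """Return the distinct (server uuid, cluster uuid) pairs the engine
    complained about, reducing the noisy repeated warnings to their root."""
    pairs = set()
    for line in log_lines:
        match = WARN_RE.search(line)
        if match:
            pairs.add((match.group("server"), match.group("cluster")))
    return sorted(pairs)
```

Feeding it the repeating warnings would show that they all point at a single gluster peer UUID that the engine cannot map to a host in the cluster, which matches the arbiter node not being added as a host.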
thanks, -- : Philipp Richter : LINFORGE | Peace of mind for your IT : : T: +43 1 890 79 99 : E: philipp.richter at linforge.com : https://www.xing.com/profile/Philipp_Richter15 : https://www.linkedin.com/in/philipp-richter : : LINFORGE Technologies GmbH : Brehmstraße 10 : 1110 Wien : Österreich : : Firmenbuchnummer: FN 216034y : USt.- Nummer : ATU53054901 : Gerichtsstand: Wien : : LINFORGE® is a registered trademark of LINFORGE, Austria. From cma at cmadams.net Tue Feb 13 15:05:34 2018 From: cma at cmadams.net (Chris Adams) Date: Tue, 13 Feb 2018 09:05:34 -0600 Subject: [ovirt-users] Network and disk inactive after 4.2.1 upgrade Message-ID: <20180213150534.GA15147@cmadams.net> I upgraded my dev cluster from 4.2.0 to 4.2.1 yesterday, and I noticed that all my VMs show the network interfaces unplugged and disks inactive (despite the VMs being up and running just fine). This includes the hosted engine. I had not rebooted VMs after upgrading, so I tried powering one off and on; it would not start until I manually activated the disk. I haven't seen a problem like this before (although it usually means that I did something wrong :) ) - what should I look at? -- Chris Adams From mburman at redhat.com Tue Feb 13 15:08:10 2018 From: mburman at redhat.com (Michael Burman) Date: Tue, 13 Feb 2018 17:08:10 +0200 Subject: [ovirt-users] Network configuration validation error In-Reply-To: <20180213130903.D177DE2269@smtp01.mail.de> References: <20180213130903.D177DE2269@smtp01.mail.de> Message-ID: Thanks for the input, It's weird that you see this bug https://bugzilla.redhat.com/show_bug.cgi?id=1528906 on 4.2.1.6 because it was already tested and verified on 4.2.1.1 I will check this again.. On Tue, Feb 13, 2018 at 3:09 PM, wrote: > > I did not see I had to enable another repo to get this update, so I was > sure I had the latest version available !
> After adding it, things went a lot better and I was able to update the > engine and all the nodes flawlessly to version 4.2.1.6-1.el7.centos > Thanks a lot for your help ! > > The "no default route error" has disappeared indeed. > > But I still couldn't validate network setup modifications on one node as I > still had the following error in the GUI : > > - must match "^b((25[0-5]|2[0-4]d|[01]dd|d?d)_){3}(25[0-5]|2[0-4]d|[01] > dd|d?d)" > - Attribute: ipConfiguration.iPv4Addresses[0].gateway > > So I tried a dummy thing : I put a value in the gateway field for the NIC > which doesn't need one (NFS), was able to validate. Then I edited it again, > removed the value and was able to validate again ! > > Regards > > > Le 12-Feb-2018 10:42:30 +0100, mburman at redhat.com a ?crit: > > "no default route" bug was fixed only on 4.2.1 > Your current version doesn't have the fix > > On Mon, Feb 12, 2018 at 11:09 AM, wrote: > >> >> >> >> >> Le 12-Feb-2018 08:06:43 +0100, jbelka at redhat.com a ?crit: >> >> > This option relevant only for the upgrade from 3.6 to 4.0(engine had >> > different OS major versions), it all other cases the upgrade flow very >> > similar to upgrade flow of standard engine environment. >> > >> > >> > 1. Put hosted-engine environment to GlobalMaintenance(you can do it via >> > UI) >> > 2. Update engine packages(# yum update -y) >> > 3. Run engine-setup >> > 4. Disable GlobalMaintenance >> > >> >> >> So I followed these steps connected in the engine VM and didn't get any >> error message. But the version showed in the GUI is >> still 4.2.0.2-1.el7.centos. Yum had no newer packages to install. And I >> still have the "no default route" and network validation problems. >> Regards >> >> > Could someone explain me at least what "Cluster PROD is at version 4.2 >> which >> > is not supported by this upgrade flow. Please fix it before upgrading." >> > means ? As far as I know 4.2 is the most recent branch available, isn't >> it ? 
>> >> I have no idea where did you get >> >> "Cluster PROD is at version 4.2 which is not supported by this upgrade >> flow. Please fix it before upgrading." >> >> Please do not cut output and provide exact one. >> >> IIUC you should do 'yum update ovirt*setup*' and then 'engine-setup' >> and only after it would finish successfully you would do 'yum -y update'. >> Maybe that's your problem? >> >> Jiri >> >> ------------------------------ >> FreeMail powered by mail.fr >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > > Michael Burman > > Senior Quality engineer - rhv network - redhat israel > > Red Hat > > > > mburman at redhat.com M: 0545355725 IM: mburman > > > > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Michael Burman Senior Quality engineer - rhv network - redhat israel Red Hat mburman at redhat.com M: 0545355725 IM: mburman -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at ecarnot.net Tue Feb 13 15:26:40 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Tue, 13 Feb 2018 16:26:40 +0100 Subject: [ovirt-users] [Qemu-block] qcow2 images corruption In-Reply-To: <20180213094111.GB5083@localhost.localdomain> References: <20180213094111.GB5083@localhost.localdomain> Message-ID: <9f16e17b-d193-7c88-2145-765fc54e8f29@ecarnot.net> Hello Kevin, Le 13/02/2018 ? 10:41, Kevin Wolf a ?crit?: > Am 07.02.2018 um 18:06 hat Nicolas Ecarnot geschrieben: >> TL; DR : qcow2 images keep getting corrupted. Any workaround? > > Not without knowing the cause. Actually, my main concern is mostly about finding the cause rather than correcting my corrupted VMs. Another way to say it : I prefer to help oVirt than help myself. 
> The first thing to make sure is that the image isn't touched by a second > process while QEMU is running a VM. Indeed, I read some BZ about this issue : they were raised by a user who ran some qemu-img commands on a "mounted" image, thus leading to some corruption. In my case, I'm not playing with this, and the corrupted VMs were only touched by classical oVirt actions. > The classic one is using 'qemu-img > snapshot' on the image of a running VM, which is instant corruption (and > newer QEMU versions have locking in place to prevent this), but we have > seen more absurd cases of things outside QEMU tampering with the image > when we were investigating previous corruption reports. > > This covers the majority of all reports, we haven't had a real > corruption caused by a QEMU bug in ages. May I ask after what QEMU version this kind of locking has been added. As I wrote, our oVirt setup is 3.6 so not recent. > >> After having found (https://access.redhat.com/solutions/1173623) the right >> logical volume hosting the qcow2 image, I can run qemu-img check on it. >> - On 80% of my VMs, I find no errors. >> - On 15% of them, I find Leaked cluster errors that I can correct using >> "qemu-img check -r all" >> - On 5% of them, I find Leaked clusters errors and further fatal errors, >> which can not be corrected with qemu-img. >> In rare cases, qemu-img can correct them, but destroys large parts of the >> image (becomes unusable), and on other cases it can not correct them at all. > > It would be good if you could make the 'qemu-img check' output available > somewhere. See attachment. > > It would be even better if we could have a look at the respective image. > I seem to remember that John (CCed) had a few scripts to analyse > corrupted qcow2 images, maybe we would be able to see something there. I just exported it like this : qemu-img convert /dev/the_correct_path /home/blablah.qcow2.img The resulting file is 32G and I need an idea to transfer this img to you. 
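With hundreds of images to sweep each night, the captured 'qemu-img check' transcripts can be bucketed automatically into the three cases described above. A sketch only: the matched phrases are assumptions about qemu-img's text output, and the checks should of course be run against inactive volumes:

```python
def classify_check_output(text: str) -> str:
    """Bucket a saved 'qemu-img check' transcript the way this thread does:
    clean, leaked-only (usually repairable), or fatal."""
    lowered = text.lower()
    has_leaks = "leaked" in lowered
    # 'ERROR' lines flag real corruption; the summary line
    # 'No errors were found on the image.' must not count as one.
    has_errors = any(
        "error" in line and "no errors were found" not in line
        for line in lowered.splitlines()
    )
    if has_errors:
        return "fatal"        # may not be repairable at all
    if has_leaks:
        return "leaked-only"  # candidate for 'qemu-img check -r all'
    return "clean"
```

Images landing in the "leaked-only" bucket would correspond to the 15% case above, while "fatal" ones are the 5% worth preserving for debugging before any repair attempt.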
> >> What I read similar to my case is : >> - usage of qcow2 >> - heavy disk I/O >> - using the virtio-blk driver >> >> In the proxmox thread, they tend to say that using virtio-scsi is the >> solution. Having asked this question to oVirt experts >> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's >> not clear the driver is to blame. > > This seems very unlikely. The corruption you're seeing is in the qcow2 > metadata, not only in the guest data. Are you saying: - the corruption is in the metadata and in the guest data OR - the corruption is only in the metadata ? > If anything, virtio-scsi exercises > more qcow2 code paths than virtio-blk, so any potential bug that affects > virtio-blk should also affect virtio-scsi, but not the other way around. I get that. > >> I agree with the answer Yaniv Kaul gave to me, saying I have to properly >> report the issue, so I'm longing to know which peculiar information I can >> give you now. > > To be honest, debugging corruption after the fact is pretty hard. We'd > need the 'qemu-img check' output Done. > and ideally the image to do anything, I remember some Redhat people once gave me a temporary access to put heavy file on some dedicated server. Is it still possible? > but I can't promise that anything would come out of this. > > Best would be a reproducer, or at least some operation that you can link > to the appearance of the corruption. Then we could take a more targeted > look at the respective code. Sure. Alas I find no obvious pattern leading to corruption : From the guest side, it appeared with windows 2003, 2008, 2012, linux centOS 6 and 7. It appeared with virtio-blk; and I changed some VMs to used virtio-scsi but it's too soon to see appearance of corruption in that case. As I said, I'm using snapshots VERY rarely, and our versions are too old so we do them the cold way only (VM shutdown). So very safely. The "weirdest" thing we do is to migrate VMs : you see how conservative we are! 
>> As you can imagine, all this setup is in production, and for most of the >> VMs, I can not "play" with them. Moreover, we launched a campaign of nightly >> stopping every VM, qemu-img check them one by one, then boot. >> So it might take some time before I find another corrupted image. >> (which I'll preciously store for debug) >> >> Other informations : We very rarely do snapshots, but I'm close to imagine >> that automated migrations of VMs could trigger similar behaviors on qcow2 >> images. > > To my knowledge, oVirt only uses external snapshots and creates them > with QMP. This should be perfectly safe because from the perspective of > the qcow2 image being snapshotted, it just means that it gets no new > write requests. > > Migration is something more involved, and if you could relate the > problem to migration, that would certainly be something to look into. In > that case, it would be important to know more about the setup, e.g. is > it migration with shared or non-shared storage? I'm 99% sure the corrupted VMs have never seen a snapshot, and 99% sure they have been migrated at most once. For me *this* is the track to follow. We have 2 main 3.6 oVirt DCs, each having 4 dedicated LUNs, connected via iSCSI. Two SANs are serving those volumes. These are Equallogic, and the setup of each volume contains a setting saying: Access type: "Shared" http://psonlinehelp.equallogic.com/V5.0/Content/V5TOC/Allowing_or_disallowing_multi_ho.htm (shared access to the iSCSI target from multiple initiators) To be honest, I've never been comfortable with this point: - In a completely different context, I'm using it to allow two file servers to publish an OCFS2 volume embedded in a clustered-LVM. It is absolutely reliable as *c*LVM and OCFS2 are explicitly written to manage concurrent access. - In the case of oVirt, we are here allowing tens of hosts to connect to the same LUN.
This LUN is then managed by a classical LVM setup, but I see here no notion of concurrent access management. To date, I still haven't understood how these concurrent accesses to the same LUN are managed with no crash. I hope I won't find any skeletons in the closet. >> Last point about the versions we use : yes that's old, yes we're planning to >> upgrade, but we don't know when. > > That would be helpful, too. Nothing is more frustrating than debugging a > bug in an old version only to find that it's already fixed in the > current version (well, except maybe debugging and finding nothing). > > Kevin Exactly, but as I wrote to Yaniv, it would be sad to set up a brand new 4.2 DC and to face the bad old issues. For the record, I just finished setting up another 4.2 DC, but it will be a while before I can apply to it a workload similar to the 3.6 production site's. -- Nicolas ECARNOT -------------- next part -------------- A non-text attachment was scrubbed... Name: qemu-img_check.txt.gz Type: application/gzip Size: 26453 bytes Desc: not available URL: From nicolas at ecarnot.net Tue Feb 13 16:35:14 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Tue, 13 Feb 2018 17:35:14 +0100 Subject: [ovirt-users] [Qemu-block] qcow2 images corruption In-Reply-To: <9f16e17b-d193-7c88-2145-765fc54e8f29@ecarnot.net> References: <20180213094111.GB5083@localhost.localdomain> <9f16e17b-d193-7c88-2145-765fc54e8f29@ecarnot.net> Message-ID: Le 13/02/2018 à 16:26, Nicolas Ecarnot a écrit : >> It would be good if you could make the 'qemu-img check' output available >> somewhere. > I found this: https://github.com/ShijunDeng/qcow2-dump and the transcript (beautiful colors when viewed with "more") is attached: -- Nicolas ECARNOT -------------- next part -------------- Le script a débuté sur mar. 13 févr.
2018 17:31:05 CET ]0;root at serv-hv-adm13:/home[?1034hroot at serv-hv-adm13:/home# /root/qcow2-dump -m check serv-term-adm4-corr.qcow2.img  File: serv-term-adm4-corr.qcow2.img ---------------------------------------------------------------- magic: 0x514649fb version: 2 backing_file_offset: 0x0 backing_file_size: 0 fs_type: xfs virtual_size: 64424509440 / 61440M / 60G disk_size: 36507222016 / 34816M / 34G seek_end: 36507222016 [0x880000000] / 34816M / 34G cluster_bits: 16 cluster_size: 65536 crypt_method: 0 csize_shift: 54 csize_mask: 255 cluster_offset_mask: 0x3fffffffffffff l1_table_offset: 0x76a460000 l1_size: 120 l1_vm_state_index: 120 l2_size: 8192 refcount_order: 4 refcount_bits: 16 refcount_block_bits: 15 refcount_block_size: 32768 refcount_table_offset: 0x10000 refcount_table_clusters: 1 snapshots_offset: 0x0 nb_snapshots: 0 incompatible_features: 00000000 compatible_features: 00000000 autoclear_features: 00000000 ================================================================ Active Snapshot: ---------------------------------------------------------------- L1 Table: [offset: 0x76a460000, len: 120] Result: L1 Table: unaligned: 0, invalid: 0, unused: 53, used: 67 L2 Table: unaligned: 0, invalid: 0, unused: 20304, used: 528560 ================================================================ Refcount Table: ---------------------------------------------------------------- Refcount Table: [offset: 0x10000, len: 8192] Result: Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17 Refcount: error: 4342, leak: 0, unused: 28426, used: 524288 ================================================================ COPIED OFLAG: ---------------------------------------------------------------- Result: L1 Table ERROR OFLAG_COPIED: 1 L2 Table ERROR OFLAG_COPIED: 4323 Active L2 COPIED: 528560 [34639708160 / 33035M / 32G] ================================================================ Active Cluster: ----------------------------------------------------------------  
Result: Active Cluster: reuse: 17 ================================================================ Summary: preallocation: off Active Cluster: reuse: 17 Refcount Table: unaligned: 0, invalid: 0, unused: 8175, used: 17 Refcount: error: 4342, leak: 0, rebuild: 4325, unused: 28426, used: 524288 L1 Table: unaligned: 0, invalid: 0, unused: 53, used: 67 oflag copied: 1 L2 Table: unaligned: 0, invalid: 0, unused: 20304, used: 528560 oflag copied: 4323 ################################################################ ### qcow2 image has refcount errors! (=_=#) ### ### and qcow2 image has copied errors! (o_0)? ### ### Sadly: refcount error cause active cluster reused! Orz ### ### Please backup this image and contact the author! ### ################################################################ ================================================================ root at serv-hv-adm13:/home# exit Script terminé sur mar. 13 févr. 2018 17:31:13 CET From nsoffer at redhat.com Tue Feb 13 17:33:39 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Tue, 13 Feb 2018 17:33:39 +0000 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: On Wed, Jan 24, 2018 at 3:19 PM Alex K wrote: > Hi all, > > I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on > top glusterfs. > On some VMs (especially one Windows server 2016 64bit with 500 GB of > disk). Guest agents are installed at VMs. i almost always observe that > during the backup of the VM the VM is rendered unresponsive (dashboard > shows a question mark at the VM status and VM does not respond to ping or > to anything). > > For scheduled backups I use: > > https://github.com/wefixit-AT/oVirtBackup > > The script does the following: > > 1. snapshot VM (this is done ok without any failure) > This is a very cheap operation > 2. Clone snapshot (this steps renders the VM unresponsive) > This copies 500g of data.
In the gluster case, it copies 1500g of data, since in glusterfs the client is doing the replication. Maybe your network or gluster server is too slow? Can you describe the network topology? Please also attach the volume info for the gluster volume, maybe it is not configured in the best way? > 3. Export Clone > This copies 500g to the export domain. If the export domain is on glusterfs as well, you now copy another 1500g of data. > 4. Delete clone > > 5. Delete snapshot > It's not clear why you need to clone the vm before you export it; you could save half of the data copies. If you are on 4.2, you can back up the vm *while the vm is running* by: - Take a snapshot - Get the vm ovf from the engine api - Download the vm disks using ovirt-imageio and store the snapshots in your backup storage - Delete a snapshot In this flow, you would copy 500g. Daniel, please correct me if I'm wrong regarding doing this online. Regardless, a vm should not become non-responsive while cloning. Please file a bug for this and attach engine, vdsm, and glusterfs logs. Nir > Do you have any similar experience? Any suggestions to address this? > > I have never seen such issue with hosted Linux VMs. > > The cluster has enough storage to accommodate the clone. > > > Thanx, > > Alex > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 13 18:59:16 2018 From: rightkicktech at gmail.com (Alex K) Date: Tue, 13 Feb 2018 20:59:16 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Thank you Nir for the below. I am putting some comments inline in blue.
On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer wrote: > On Wed, Jan 24, 2018 at 3:19 PM Alex K wrote: > >> Hi all, >> >> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on >> top glusterfs. >> On some VMs (especially one Windows server 2016 64bit with 500 GB of >> disk). Guest agents are installed at VMs. i almost always observe that >> during the backup of the VM the VM is rendered unresponsive (dashboard >> shows a question mark at the VM status and VM does not respond to ping or >> to anything). >> >> For scheduled backups I use: >> >> https://github.com/wefixit-AT/oVirtBackup >> >> The script does the following: >> >> 1. snapshot VM (this is done ok without any failure) >> > > This is a very cheap operation > > >> 2. Clone snapshot (this steps renders the VM unresponsive) >> > > This copy 500g of data. In gluster case, it copies 1500g of data, since in > glusterfs, the client > is doing the replication. > > Maybe your network or gluster server is too slow? Can you describe the > network topology? > > Please attach also the volume info for the gluster volume, maybe it is not > configured in the > best way? > The network is 1Gbit. The hosts (3 of them) are decent, new hardware, each host having 32GB RAM, 16 CPU cores and 2 TB of storage in RAID10. The VMs hosted (7 VMs) exhibit high performance. The VMs are Windows 2016 and Windows 10. The network topology is: two networks defined at ovirt: ovirtmgmt is the management and access network, and "storage" is a separate network, where each server is connected with two network cables to a managed switch with mode 6 load balancing. This storage network is used for gluster traffic. The volume configuration is attached. > 3. Export Clone >> > > This copy 500g to the export domain. If the export domain is on glusterfs > as well, you > copy now another 1500g of data. > > The export domain is a Synology NAS with an NFS share. If the cloning succeeds, then the export completes ok. > 4. Delete clone >> >> 5.
Delete snapshot >> > > Not clear why do you need to clone the vm before you export it, you can > save half of > the data copies. > Because I cannot export the VM while it is running. It does not provide such an option. > > If you 4.2, you can backup the vm *while the vm is running* by: > - Take a snapshot > - Get the vm ovf from the engine api > - Download the vm disks using ovirt-imageio and store the snaphosts in > your backup > storage > - Delete a snapshot > > In this flow, you would copy 500g. > > I was not aware of this option. Checking quickly at the site, it seems that it is still only half implemented? Is there any script that I may use to test this? I am interested in having these backups scheduled. > Daniel, please correct me if I'm wrong regarding doing this online. > > Regardless, a vm should not become non-responsive while cloning. Please > file a bug > for this and attach engine, vdsm, and glusterfs logs. > > Nir > > Do you have any similar experience? Any suggestions to address this? >> >> I have never seen such issue with hosted Linux VMs. >> >> The cluster has enough storage to accommodate the clone. >> >> >> Thanx, >> >> Alex >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- Volume Name: vms Type: Replicate Volume ID: 00fee7f3-76e6-42b2-8f66-606b91df4a97 Status: Started Snapshot Count: 0 Number of Bricks: 1 x 3 = 3 Transport-type: tcp Bricks: Brick1: gluster2:/gluster/vms/brick Brick2: gluster0:/gluster/vms/brick Brick3: gluster1:/gluster/vms/brick Options Reconfigured: features.shard-block-size: 512MB server.allow-insecure: on performance.strict-o-direct: on network.ping-timeout: 30 storage.owner-gid: 36 storage.owner-uid: 36 user.cifs: off features.shard: on cluster.shd-wait-qlength: 10000 cluster.shd-max-threads: 8 cluster.locking-scheme: granular cluster.data-self-heal-algorithm: full cluster.server-quorum-type: server cluster.quorum-type: auto cluster.eager-lock: enable network.remote-dio: on performance.low-prio-threads: 32 performance.stat-prefetch: off performance.io-cache: off performance.read-ahead: off performance.quick-read: off transport.address-family: inet performance.readdir-ahead: off nfs.disable: on nfs.export-volumes: on cluster.granular-entry-heal: enable performance.cache-size: 1GB server.event-threads: 4 client.event-threads: 4 [root at v0 setel]# From jlawrence at squaretrade.com Wed Feb 14 01:11:04 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Tue, 13 Feb 2018 17:11:04 -0800 Subject: [ovirt-users] hosted engine install fails on useless DHCP lookup Message-ID: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com> Hello, I'm seeing the hosted engine install fail on an Ansible playbook step. Log below. I tried looking at the file specified for retry, below (/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); it contains the word 'localhost'. The log below didn't contain anything I could see that was actionable; given that it was an ansible error, I hunted down the config and enabled logging. On this run the error was different - the installer log was the same, but the reported error (from the installer) changed.
The first time, the installer said: [ INFO ] TASK [Wait for the host to become non operational] [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up Second: [ INFO ] TASK [Get local vm ip] [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.093840", "end": "2018-02-13 16:53:08.658556", "rc": 0, "start": "2018-02-13 16:53:08.564716", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook [ INFO ] Stage: Clean up Ansible log below; as with that second snippet, it appears that it was trying to parse an IP address out of virsh's list of DHCP leases, couldn't, and died. Which makes sense: I gave it a static IP, and unless I'm missing something, setup should not have been doing that. I verified that the answer file has the IP: OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.181.26.150/24 Anyone see what is wrong here?
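For what it's worth, the failing task is plain text processing over `virsh` output, so its behavior can be reproduced offline. A sketch with a made-up lease table (the MAC and IP values are hypothetical; only the column layout matters) shows both cases:

```shell
# Simulated 'virsh -r net-dhcp-leases default' output (illustrative values)
leases='Expiry Time          MAC address        Protocol  IP address          Hostname  Client ID
-------------------------------------------------------------------------------------------------
2018-02-13 17:53:08  00:16:3e:11:e7:bd  ipv4      192.168.122.57/24   engine    -'

# The exact pipeline from the failing task, fed from the variable instead
# of virsh: field 5 is "IP/prefix", and cut strips the prefix length.
echo "$leases" | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'
# -> 192.168.122.57

# With no matching lease, grep matches nothing and the task's stdout is an
# empty string, which is what the attempts: 50 retry loop kept seeing.
echo '' | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'
# -> (empty)
```

That second case is consistent with a guest configured with a static IP, which never requests a lease on libvirt's `default` network in the first place.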
-j hosted-engine --deploy log: 2018-02-13 16:20:32,138-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Force host-deploy in offline mode] 2018-02-13 16:20:33,041-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-02-13 16:20:33,342-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [include_tasks] 2018-02-13 16:20:33,443-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-02-13 16:20:33,744-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Obtain SSO token using username/password credentials] 2018-02-13 16:20:35,248-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 ok: [localhost] 2018-02-13 16:20:35,550-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Add host] 2018-02-13 16:20:37,053-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 changed: [localhost] 2018-02-13 16:20:37,355-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:100 TASK [Wait for the host to become non operational] 2018-02-13 16:27:48,895-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 {u'_ansible_parsed': True, u'_ansible_no_log': False, u'changed': False, u'attempts': 150, u'invocation': {u'module_args': {u'pattern': u'name=ovirt-1.squaretrade.com', u'fetch_nested': False, u'nested_attributes': []}}, u'ansible_facts': {u'ovirt_hosts': []}} 2018-02-13 16:27:48,995-0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:98 fatal: [localhost]: FAILED! 
=> {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} 2018-02-13 16:27:49,297-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 42 changed: 17 unreachable: 0 skipped: 2 failed: 1 2018-02-13 16:27:49,397-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:94 PLAY RECAP [ovirt-engine-1.squaretrade.com] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0 2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:180 ansible-playbook rc: 2 2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:187 ansible-playbook stdout: 2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry 2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils.run:190 ansible-playbook stderr: 2018-02-13 16:27:49,500-0800 DEBUG otopi.context context._executeMethod:143 method exception Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in _executeMethod method['method']() File "/usr/share/ovirt-hosted-engine-setup/scripts/../plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup r = ah.run() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", line 194, in run raise RuntimeError(_('Failed executing ansible-playbook')) RuntimeError: Failed executing ansible-playbook 2018-02-13 16:27:49,512-0800 ERROR otopi.context context._executeMethod:152 Failed to execute stage 'Closing up': Failed executing ansible-playbook 2018-02-13 16:27:49,513-0800 DEBUG otopi.context context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN - - - - - - ansible log snip: 2018-02-13 16:52:47,548 ovirt-hosted-engine-setup-ansible ansible on_any args (,) 
kwargs {} 2018-02-13 16:52:58,124 ovirt-hosted-engine-setup-ansible ansible on_any args (,) kwargs {} 2018-02-13 16:53:08,954 ovirt-hosted-engine-setup-ansible var changed: host "localhost" var "local_vm_ip" type "" value: "{'stderr_lines': [], u'changed': True, u'end': u'2018-02-13 16:53:08.658556', u'stdout': u'', u'cmd': u"virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'", u'rc': 0, u'start': u'2018-02-13 16:53:08.564716', 'attempts': 50, u'stderr': u'', u'delta': u'0:00:00.093840', 'stdout_lines': [], 'failed': True}" From Alex at unix1337.com Wed Feb 14 03:20:31 2018 From: Alex at unix1337.com (Alex Bartonek) Date: Tue, 13 Feb 2018 22:20:31 -0500 Subject: [ovirt-users] Unable to connect to the graphic server Message-ID: I've built and rebuilt about 4 oVirt servers. Consider myself pretty good at this. LOL. So I am setting up an oVirt server for a friend on his r710. CentOS 7, ovirt 4.2. /etc/hosts has the correct IP and FQDN setup. When I build a VM and try to open a console session via SPICE, I am unable to connect to the graphic server. I'm connecting from a Windows 10 box, using virt-manager to connect. I've googled and I just can't seem to find any resolution to this. Now, I did build the server on my home network, but the subnet it's on is the same... internal 192.168.1.xxx. The web interface is accessible also. Any hints as to what else I can check? Thanks! Sent with [ProtonMail](https://protonmail.com) Secure Email. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsoyer at systea.fr Tue Feb 13 10:07:50 2018 From: fsoyer at systea.fr (fsoyer) Date: Tue, 13 Feb 2018 11:07:50 +0100 Subject: [ovirt-users] VM with multiple vdisks can't migrate Message-ID: <3c75-5a82b900-11-20359180@117816342> Hi all, I discovered yesterday a problem when migrating VMs with more than one vdisk.
On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test, from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well. Then I saw some updates waiting on the host and tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problems at the same time. I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes, the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI alerted on a failed migration. Just in case, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And with no access problems. I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problems!??? So after another test with a VM with 2 vdisks, I can say that this blocked the migration process :( In engine.log, for a VM with 1 vdisk migrating well, we see: 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
Entities affected : ?ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, 
action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin at internal-authz). 
2018-02-12 16:46:31,106+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, 
device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, 
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, 
tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} ? 
For the VM with 2 vdisks we see : 2018-02-12 16:49:06,112+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ?ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', 
hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin at internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done and so on, with the last lines repeated indefinitely for hours until we powered off the VM... Is this something known? Any idea about that? Thanks -- Cordialement, Frank Soyer -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jsnow at redhat.com Tue Feb 13 23:01:43 2018 From: jsnow at redhat.com (John Snow) Date: Tue, 13 Feb 2018 18:01:43 -0500 Subject: [ovirt-users] [Qemu-block] qcow2 images corruption In-Reply-To: <20180213094111.GB5083@localhost.localdomain> References: <20180213094111.GB5083@localhost.localdomain> Message-ID: On 02/13/2018 04:41 AM, Kevin Wolf wrote: > Am 07.02.2018 um 18:06 hat Nicolas Ecarnot geschrieben: >> TL; DR : qcow2 images keep getting corrupted. Any workaround? > > Not without knowing the cause. > > The first thing to make sure is that the image isn't touched by a second > process while QEMU is running a VM. The classic one is using 'qemu-img > snapshot' on the image of a running VM, which is instant corruption (and > newer QEMU versions have locking in place to prevent this), but we have > seen more absurd cases of things outside QEMU tampering with the image > when we were investigating previous corruption reports. > > This covers the majority of all reports, we haven't had a real > corruption caused by a QEMU bug in ages. > >> After having found (https://access.redhat.com/solutions/1173623) the right >> logical volume hosting the qcow2 image, I can run qemu-img check on it. >> - On 80% of my VMs, I find no errors. >> - On 15% of them, I find Leaked cluster errors that I can correct using >> "qemu-img check -r all" >> - On 5% of them, I find Leaked clusters errors and further fatal errors, >> which can not be corrected with qemu-img. >> In rare cases, qemu-img can correct them, but destroys large parts of the >> image (becomes unusable), and on other cases it can not correct them at all. > > It would be good if you could make the 'qemu-img check' output available > somewhere. > > It would be even better if we could have a look at the respective image. > I seem to remember that John (CCed) had a few scripts to analyse > corrupted qcow2 images, maybe we would be able to see something there. > Hi! 
I did write a pretty simplistic tool for trying to tell the shape of a corruption at a glance. It seems to work pretty similarly to the other tool you already found, but it won't hurt anything to run it: https://github.com/jnsnow/qcheck (Actually, that other tool looks like it has an awful lot of options. I'll have to check it out.) It can print a really upsetting amount of data (especially for very corrupt images), but in the default case, the simple setting should do the trick just fine. You could always put the output from this tool in a pastebin too; it might help me visualize the problem a bit more -- I find seeing the exact offsets and locations of where all the various tables and things to be pretty helpful. You can also always use the "deluge" option and compress it if you want, just don't let it print to your terminal: jsnow at probe (dev) ~/s/qcheck> ./qcheck -xd /home/bos/jsnow/src/qemu/bin/git/install_test_f26.qcow2 > deluge.log; and ls -sh deluge.log 4.3M deluge.log but it compresses down very well: jsnow at probe (dev) ~/s/qcheck> 7z a -t7z -m0=ppmd deluge.ppmd.7z deluge.log jsnow at probe (dev) ~/s/qcheck> ls -s deluge.ppmd.7z 316 deluge.ppmd.7z So I suppose if you want to send along: (1) The basic output without any flags, in a pastebin (2) The zipped deluge output, just in case and I will try my hand at guessing what went wrong. (Also, maybe my tool will totally choke for your image, who knows. It hasn't received an overwhelming amount of testing apart from when I go to use it personally and inevitably wind up displeased with how it handles certain situations, so ...) >> What I read similar to my case is : >> - usage of qcow2 >> - heavy disk I/O >> - using the virtio-blk driver >> >> In the proxmox thread, they tend to say that using virtio-scsi is the >> solution. Having asked this question to oVirt experts >> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but it's >> not clear the driver is to blame. 
> > This seems very unlikely. The corruption you're seeing is in the qcow2 > metadata, not only in the guest data. If anything, virtio-scsi exercises > more qcow2 code paths than virtio-blk, so any potential bug that affects > virtio-blk should also affect virtio-scsi, but not the other way around. > >> I agree with the answer Yaniv Kaul gave to me, saying I have to properly >> report the issue, so I'm longing to know which peculiar information I can >> give you now. > > To be honest, debugging corruption after the fact is pretty hard. We'd > need the 'qemu-img check' output and ideally the image to do anything, > but I can't promise that anything would come out of this. > > Best would be a reproducer, or at least some operation that you can link > to the appearance of the corruption. Then we could take a more targeted > look at the respective code. > >> As you can imagine, all this setup is in production, and for most of the >> VMs, I can not "play" with them. Moreover, we launched a campaign of nightly >> stopping every VM, qemu-img check them one by one, then boot. >> So it might take some time before I find another corrupted image. >> (which I'll preciously store for debug) >> >> Other informations : We very rarely do snapshots, but I'm close to imagine >> that automated migrations of VMs could trigger similar behaviors on qcow2 >> images. > > To my knowledge, oVirt only uses external snapshots and creates them > with QMP. This should be perfectly safe because from the perspective of > the qcow2 image being snapshotted, it just means that it gets no new > write requests. > > Migration is something more involved, and if you could relate the > problem to migration, that would certainly be something to look into. In > that case, it would be important to know more about the setup, e.g. is > it migration with shared or non-shared storage? > >> Last point about the versions we use : yes that's old, yes we're planning to >> upgrade, but we don't know when. 
> > That would be helpful, too. Nothing is more frustrating than debugging a > bug in an old version only to find that it's already fixed in the > current version (well, except maybe debugging and finding nothing). > > Kevin > And, looking at your other email: "- In the case of oVirt, we are here allowing tens of hosts to connect to the same LUN. This LUN is then managed by a classical LVM setup, but I see here no notion of concurrent access management. To date, I still haven't understood how was managed these concurrent access to the same LUN with no crash." I'm hoping someone else on the list can chime in on whether this is safe or not -- I'm not really familiar with how oVirt does things, but as long as the rest of the stack is sound and nothing else is touching the qcow2 data area, we should be OK, I'd hope. (Though the last big qcow2 corruption I had to debug wound up being in the storage stack and not in QEMU, so I have some prejudices here.) Anyway, I'll try to help as best as I'm able, but no promises. --js From enrico.becchetti at pg.infn.it Wed Feb 14 08:11:42 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Wed, 14 Feb 2018 09:11:42 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> Message-ID: <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it> Hi, you can also download them through these links: https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb Thanks again !!!! Best Regards Enrico > Il 13/02/2018 14:52, Maor Lipchuk ha scritto: >> >> >> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk > > wrote: >> >> >> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti >> > > wrote: >> >> see the attached files please ... thanks for your attention !!! 
>> >> >> Seems like the engine logs do not contain the entire process, >> can you please share older logs since the import operation? >> >> >> And VDSM logs as well from your host >> >> Best Regards >> Enrico >> >> >> Il 13/02/2018 14:09, Maor Lipchuk ha scritto: >>> >>> >>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti >>> >> > wrote: >>> >>> Dear All, >>> I have been using ovirt for a long time with three >>> hypervisors and an external engine running in a CentOS vm. >>> >>> These three hypervisors have HBAs and access to fiber >>> channel storage. Until recently I used version 3.5, then >>> I reinstalled everything from scratch and now I have 4.2. >>> >>> Before formatting everything, I detached the storage data >>> domain (FC) with the virtual machines and reimported it >>> to the new 4.2 and all went well. In >>> this domain there were virtual machines with and without >>> snapshots. >>> >>> Now I have two problems. The first is that if I try to >>> delete a snapshot the process does not end successfully and >>> remains hanging, and the second problem is that >>> in one case I lost the virtual machine !!! >>> >>> >>> >>> Not sure that I fully understand the scenario. >>> How did the virtual machine get lost if you only tried to >>> delete a snapshot? >>> >>> >>> So I need your help to kill the three running zombie >>> tasks because with taskcleaner.sh I can't do anything >>> and then I need to know how I can delete the old snapshots >>> made with the 3.5 without losing other data or without >>> having new processes that terminate correctly. >>> >>> If you want some log files please let me know. >>> >>> >>> >>> Hi Enrico, >>> >>> Can you please attach the engine and VDSM logs >>> >>> >>> Thank you so much. 
>>> Best Regards >>> Enrico >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >> >> -- >> _______________________________________________________________________ >> >> Enrico Becchetti Servizio di Calcolo e Reti >> >> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> >> > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: Firma crittografica S/MIME URL: From didi at redhat.com Wed Feb 14 08:23:51 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Wed, 14 Feb 2018 10:23:51 +0200 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek wrote: > I've built and rebuilt about 4 oVirt servers. Consider myself pretty good > at this. LOL. 
> So I am setting up an oVirt server for a friend on his r710. CentOS 7, ovirt > 4.2. /etc/hosts has the correct IP and FQDN setup. > > When I build a VM and try to open a console session via SPICE I am unable > to connect to the graphic server. I'm connecting from a Windows 10 box. > Using virt-manager to connect. What happens when you try? > > I've googled and I just can't seem to find any resolution to this. Now, I > did build the server on my home network but the subnet it's on is the same... > internal 192.168.1.xxx. The web interface is accessible also. > > Any hints as to what else I can check? If virt-viewer does open up but fails to connect, check (e.g. with netstat) where it tries to connect to. Check that you have network access there (no filtering/routing/NAT/etc. issues), and that qemu on the host is listening on the port it tries, etc. If it does not open, try to tell your browser (if it does not already) to not open it automatically, but to ask you what to do. Then save the file you get and check it. Best regards, > > Thanks! > > > Sent with ProtonMail Secure Email. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Didi From stirabos at redhat.com Wed Feb 14 08:52:51 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Wed, 14 Feb 2018 09:52:51 +0100 Subject: [ovirt-users] CentOS 7 Hyperconverged oVirt 4.2 with Self-Hosted-Engine with glusterfs with 2 Hypervisors and 1 glusterfs-Arbiter-only In-Reply-To: <739330687.45130.1518533860492.JavaMail.zimbra@linforge.com> References: <544259669.37244.1518449118807.JavaMail.zimbra@linforge.com> <739330687.45130.1518533860492.JavaMail.zimbra@linforge.com> Message-ID: On Tue, Feb 13, 2018 at 3:57 PM, Philipp Richter < philipp.richter at linforge.com> wrote: > Hi, > > > The recommended way to install this would be by using one of the > > "full" nodes and deploying hosted engine via cockpit there. 
The > > gdeploy plugin in cockpit should allow you to configure the arbiter > > node. > > > > The documentation for deploying RHHI (hyper converged RH product) is > > here: > > https://access.redhat.com/documentation/en-us/red_hat_ > hyperconverged_infrastructure/1.1/html-single/deploying_red_ > hat_hyperconverged_infrastructure/index#deploy > > Thanks for the documentation pointer about RHHI. > I was able to successfully setup all three Nodes. I had to edit the final > gdeploy File, as the Installer reserves 20GB per arbiter volume and I don't > have that much space available for this POC. > > The problem now is that I don't see the third node i.e. in the Storage / > Volumes / Bricks view, and I get warning messages every few seconds into > the /var/log/ovirt-engine/engine.log like: > > 2018-02-13 15:40:26,188+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler3) [5a8c68e2] Could not add brick > 'ovirtpoc03-storage:/gluster_bricks/engine/engine' to volume > '2e7a0ac3-3a74-40ba-81ff-d45b2b35aace' - server uuid > '0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster > 'cab4ba5c-10ba-11e8-aed5-00163e6a7af9' > 2018-02-13 15:40:26,193+01 WARN [org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListReturn] > (DefaultQuartzScheduler3) [5a8c68e2] Could not add brick > 'ovirtpoc03-storage:/gluster_bricks/vmstore/vmstore' to volume > '5a356223-8774-4944-9a95-3962a3c657e4' - server uuid > '0a100f2f-a9ee-4711-b997-b674ee61f539' not found in cluster > 'cab4ba5c-10ba-11e8-aed5-00163e6a7af9' > > Of course I cannot add the third node as normal oVirt Host as it is slow, > has only minimal amount of RAM and the CPU (AMD) is different than that one > of the two "real" Hypervisors (Intel). > > Is there a way to add the third Node only for gluster management, not as > Hypervisor? Or is there any other method to at least quieten the log? > Adding Sahina here. 
> > thanks, > -- > > : Philipp Richter > : LINFORGE | Peace of mind for your IT > : > : T: +43 1 890 79 99 > : E: philipp.richter at linforge.com > : https://www.xing.com/profile/Philipp_Richter15 > : https://www.linkedin.com/in/philipp-richter > : > : LINFORGE Technologies GmbH > : Brehmstraße 10 > : 1110 Wien > : Österreich > : > : Firmenbuchnummer: FN 216034y > : USt.- Nummer : ATU53054901 > : Gerichtsstand: Wien > : > : LINFORGE® is a registered trademark of LINFORGE, Austria. > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsoyer at systea.fr Wed Feb 14 09:23:22 2018 From: fsoyer at systea.fr (fsoyer) Date: Wed, 14 Feb 2018 10:23:22 +0100 Subject: [ovirt-users] VMs with multiple vdisks don't migrate Message-ID: <73f0-5a840000-231-3a296d00@233471441> Hi all, I discovered yesterday a problem when migrating VMs with more than one vdisk. On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk. To these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well. Then I saw some updates waiting on the host. I tried to put it in maintenance... but it got stuck on the two VMs. They were marked "migrating", but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time. I saw that a kvm process for the (big) VMs was launched on the source AND destination hosts, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI alerted on a failed migration.
As a further test, I tried to delete the second vdisk on one of these VMs: it then migrated without error! And with no access problem. I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!? So after another test with a VM with 2 vdisks, I can say that the second vdisk is what blocks the migration process :( In engine.log, for a VM with 1 vdisk migrating well, we see: 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}},
{limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom 
Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin at internal-authz). 2018-02-12 16:46:31,106+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', 
deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: 
{device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, 
tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} ? 
For the VM with 2 vdisks we see : 2018-02-12 16:49:06,112+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ?ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', 
hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin at internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... and so on, the last lines repeated indefinitely for hours until we powered off the VM... Is this a known issue? Any idea about it? Thanks. Ovirt 4.1.6, last updated on Feb 13. Gluster 3.12.1. -- Cordialement, Frank Soyer -------------- next part -------------- An HTML attachment was scrubbed...
URL: From stirabos at redhat.com Wed Feb 14 09:27:38 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Wed, 14 Feb 2018 10:27:38 +0100 Subject: [ovirt-users] hosted engine install fails on useless DHCP lookup In-Reply-To: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com> References: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com> Message-ID: On Wed, Feb 14, 2018 at 2:11 AM, Jamie Lawrence wrote: > Hello, > > I'm seeing the hosted engine install fail on an Ansible playbook step. Log > below. I tried looking at the file specified for retry, below > (/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); > it contains the word, 'localhost'. > > The log below didn't contain anything I could see that was actionable; > given that it was an ansible error, I hunted down the config and enabled > logging. On this run the error was different - the installer log was the > same, but the reported error (from the installer) changed. > > The first time, the installer said: > > [ INFO ] TASK [Wait for the host to become non operational] > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": > []}, "attempts": 150, "changed": false} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > 'localhost' here is not an issue by itself: the playbook is executed on the host against the same host over a local connection, so localhost is absolutely fine there. Maybe you hit this one: https://bugzilla.redhat.com/show_bug.cgi?id=1540451 It seems NetworkManager related, but it's still not that clear. Stopping NetworkManager and starting network before the deployment seems to help. > > > Second: > > [ INFO ] TASK [Get local vm ip] > [ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 50, "changed": true, > "cmd": "virsh -r net-dhcp-leases default | grep -i 00:16:3e:11:e7:bd | awk > '{ print $5 }' | cut -f1 -d'/'", "delta": "0:00:00.093840", "end": > "2018-02-13 16:53:08.658556", "rc": 0, "start": "2018-02-13 > 16:53:08.564716", "stderr": "", "stderr_lines": [], "stdout": "", > "stdout_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO ] Stage: Clean up > > > > Ansible log below; as with that second snippet, it appears that it was > trying to parse out an IP address from virsh's list of DHCP leases, couldn't, > and died. > > Which makes sense: I gave it a static IP, and unless I'm missing > something, setup should not have been doing that. I verified that the > answer file has the IP: > > OVEHOSTED_VM/cloudinitVMStaticCIDR=str:10.181.26.150/24 > > Anyone see what is wrong here? > This is absolutely fine. The new Ansible-based flow (also called node zero) uses an engine running on a local virtual machine to bootstrap the system. The bootstrap local VM runs on libvirt's default NATed network with its own DHCP instance, which is why we are consuming it. The locally running engine will create a target virtual machine on the shared storage, and that one will instead be configured as you specified.
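To illustrate what the failing "Get local vm ip" task is doing: it scans the lease table printed by `virsh -r net-dhcp-leases default` for the bootstrap VM's MAC address and takes the fifth column (the leased IP, stripped of its prefix length), exactly like the `grep | awk '{print $5}' | cut -f1 -d'/'` pipeline in the error above. A rough Python equivalent follows; the lease table is a fabricated sample, and an empty table reproduces the empty stdout the installer saw.

```python
# Rough equivalent of: grep -i <mac> | awk '{ print $5 }' | cut -f1 -d'/'
def lease_ip(virsh_output, mac):
    """Return the IP leased to `mac`, or None if no lease exists yet."""
    for line in virsh_output.splitlines():
        fields = line.split()
        # Data rows: <date> <time> <mac> <proto> <ip>/<prefix> <hostname> ...
        if len(fields) >= 5 and fields[2].lower() == mac.lower():
            return fields[4].split("/")[0]
    return None  # no lease -> the Ansible task retries, then gives up

# Fabricated sample of `virsh -r net-dhcp-leases default` output.
sample = """\
 Expiry Time          MAC address        Protocol  IP address         Hostname
------------------------------------------------------------------------------
 2018-02-13 17:53:01  00:16:3e:11:e7:bd  ipv4      192.168.122.37/24  engine
"""
print(lease_ip(sample, "00:16:3e:11:e7:bd"))  # 192.168.122.37
print(lease_ip(sample, "00:16:3e:00:00:00"))  # None
```

So when the bootstrap VM never obtains a lease on the libvirt default network (for example because the network is down or DHCP is blocked), the task loops through its 50 attempts and fails as shown in the log.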
> > -j > > > hosted-engine --deploy log: > > 2018-02-13 16:20:32,138-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [Force host-deploy in offline mode] > 2018-02-13 16:20:33,041-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 changed: [localhost] > 2018-02-13 16:20:33,342-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [include_tasks] > 2018-02-13 16:20:33,443-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 ok: [localhost] > 2018-02-13 16:20:33,744-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [Obtain SSO token using > username/password credentials] > 2018-02-13 16:20:35,248-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 ok: [localhost] > 2018-02-13 16:20:35,550-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [Add host] > 2018-02-13 16:20:37,053-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 changed: [localhost] > 2018-02-13 16:20:37,355-0800 INFO otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:100 TASK [Wait for the host to become non > operational] > 2018-02-13 16:27:48,895-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 {u'_ansible_parsed': True, > u'_ansible_no_log': False, u'changed': False, u'attempts': 150, > u'invocation': {u'module_args': {u'pattern': u'name= > ovirt-1.squaretrade.com', u'fetch_nested': False, u'nested_attributes': > []}}, u'ansible_facts': {u'ovirt_hosts': []}} > 2018-02-13 16:27:48,995-0800 ERROR otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:98 fatal: [localhost]: FAILED! 
=> > {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} > 2018-02-13 16:27:49,297-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 PLAY RECAP [localhost] : ok: 42 changed: > 17 unreachable: 0 skipped: 2 failed: 1 > 2018-02-13 16:27:49,397-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils._process_output:94 PLAY RECAP [ovirt-engine-1.squaretrade. > com] : ok: 15 changed: 8 unreachable: 0 skipped: 4 failed: 0 > 2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:180 ansible-playbook rc: 2 > 2018-02-13 16:27:49,498-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:187 ansible-playbook stdout: > 2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:189 to retry, use: --limit @/usr/share/ovirt-hosted- > engine-setup/ansible/bootstrap_local_vm.retry > > 2018-02-13 16:27:49,499-0800 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils > ansible_utils.run:190 ansible-playbook stderr: > 2018-02-13 16:27:49,500-0800 DEBUG otopi.context > context._executeMethod:143 method exception > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/otopi/context.py", line 133, in > _executeMethod > method['method']() > File "/usr/share/ovirt-hosted-engine-setup/scripts/../ > plugins/gr-he-ansiblesetup/core/misc.py", line 186, in _closeup > r = ah.run() > File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_setup/ansible_utils.py", > line 194, in run > raise RuntimeError(_('Failed executing ansible-playbook')) > RuntimeError: Failed executing ansible-playbook > 2018-02-13 16:27:49,512-0800 ERROR otopi.context > context._executeMethod:152 Failed to execute stage 'Closing up': Failed > executing ansible-playbook > 2018-02-13 16:27:49,513-0800 DEBUG otopi.context > context.dumpEnvironment:859 ENVIRONMENT DUMP - BEGIN > > > - - - - - - > > ansible log snip: 
> > 2018-02-13 16:52:47,548 ovirt-hosted-engine-setup-ansible ansible on_any > args ( 0x7f00dc19f850>,) kwargs {} > 2018-02-13 16:52:58,124 ovirt-hosted-engine-setup-ansible ansible on_any > args (,) > kwargs {} > 2018-02-13 16:53:08,954 ovirt-hosted-engine-setup-ansible var changed: > host "localhost" var "local_vm_ip" type "" value: > "{'stderr_lines': [], u'changed': True, u'end': u'2018-02-13 > 16:53:08.658556', u'stdout': u'', u'cmd': u"virsh -r net-dhcp-leases > default | grep -i 00:16:3e:11:e7:bd | awk '{ print $5 }' | cut -f1 -d'/'", > u'rc': 0, u'start': u'2018-02-13 16:53:08.564716', 'attempts': 50, > u'stderr': u'', u'delta': u'0:00:00.093840', 'stdout_lines': [], 'failed': > True}" > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Wed Feb 14 10:03:10 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Wed, 14 Feb 2018 12:03:10 +0200 Subject: [ovirt-users] VM with multiple vdisks can't migrate In-Reply-To: <3c75-5a82b900-11-20359180@117816342> References: <3c75-5a82b900-11-20359180@117816342> Message-ID: Hi Frank, Can you please attach the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr? Thanks, Maor On Tue, Feb 13, 2018 at 12:07 PM, fsoyer wrote: > Hi all, > I discovered yesterday a problem when migrating VMs with more than one > vdisk. > On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 > VMs needed for a test, from a template with a 20G vdisk. On these VMs I > added a 100G vdisk (for these tests I didn't want to waste time extending > the existing vdisks... But I lost time in the end...). The VMs with the 2 > vdisks work well. > Now I saw some updates waiting on the host. I tried to put it in > maintenance... But it got stuck on the two VMs.
They were marked "migrating" > but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated > without problem at the same time. > I saw that a kvm process for the (big) VMs was launched on the source AND > destination host, but after tens of minutes the migration and the VMs were > still frozen. I tried to cancel the migration for the VMs: it failed. The > only way to stop it was to power off the VMs: the kvm process died on the 2 > hosts and the GUI reported a failed migration. > As a test, I deleted the second vdisk on one of these VMs: it > then migrated without error! And no access problem. > I extended the first vdisk of the second VM, then deleted the second > vdisk: it now migrates without problem! > > So after another test with a VM with 2 vdisks, I can say that the second vdisk blocked > the migration process :( > > In engine.log, for a VM with 1 vdisk migrating well, we see: > > 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to > object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', > sharedLocks=''}' > 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > Running command: MigrateVmToServerCommand internal: false.
Entities > affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group > MIGRATE_VM with role type USER > 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', > dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' > 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 14f61ee0 > 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName > = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', > dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' > 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 775cd381 > 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, > log id: 775cd381 > 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 > 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), > Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: > 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, > Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, > User: admin at internal-authz). > 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) > [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, > FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 > 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) > [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, > guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, > QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, > timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, > guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, > custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId=' > 879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='unix', type='CHANNEL', bootOrder='0', specParams='[]', > address='{bus=0, controller=0, type=virtio-serial, port=1}', > managed='false', plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 
879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, > kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, > devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, > clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 
54b4b435 > 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) > [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' > 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) > [54a65b66] Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) > [54a65b66] Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly > detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( > ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a- > f17d7cd87bb1') > 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS > 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring > it in the refresh until migration is done > .... > 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) > 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] > START, DestroyVDSCommand(HostName = victor.local.systea.fr, > DestroyVmVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', > secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log > id: 560eca57 > 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] > FINISH, DestroyVDSCommand, log id: 560eca57 > 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from > 'MigratingFrom' --> 'Down' > 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing > over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host > 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo' > 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from > 'MigratingTo' --> 'Up' > 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] > START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, > MigrateStatusVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 > 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] > FINISH, MigrateStatusVDSCommand, log id: 7a25c281 > 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] > EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, > Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom > ID: null, Custom Event ID: -1, Message: Migration completed (VM: > Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: > ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual > downtime: (N/A)) > 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (ForkJoinPool-1-worker-4) [] Lock freed to object > 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', > sharedLocks=''}' > 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] > START, FullListVDSCommand(HostName = ginger.local.systea.fr, > FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 > 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] > FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, > tabletEnable=true, pid=18748, guestDiskMapping={}, > transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, > guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93- > 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- > 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=1}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 
879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, > maxMemSlots=16, kvmEnable=true, pitReinjection=false, > displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, > memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, > display=vnc}], log id: 7cc65298 > 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] > Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.
> vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] > Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) > [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, > tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_ > HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, > QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, > timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang. 
> Object;@77951faf, custom={device_fbddd528-7d93- > 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- > 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=1}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > 
plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, > maxMemSlots=16, kvmEnable=true, pitReinjection=false, > displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, > memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, > display=vnc}], log id: 58cdef4c > 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) > [7fcb200a] Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.
> vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) > [7fcb200a] Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > > > > > For the VM with 2 vdisks we see : > > 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to > object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', > sharedLocks=''}' > 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > Running command: MigrateVmToServerCommand internal: false. 
Entities > affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group > MIGRATE_VM with role type USER > 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', > dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' > 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 3702a9e0 > 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName > = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', > dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' > 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 1840069c > 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, > log id: 1840069c > 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 > 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), > Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: > f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, > Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, > User: admin at internal-authz). > ... > 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) > [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' > 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly > detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( > victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- > 858db285cf69') > 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring > it in the refresh until migration is done > ... > 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly > detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( > victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- > 858db285cf69') > 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring > it in the refresh until migration is done > > > > and so on, the last lines repeated indefinitely for hours until we powered off the > VM... > Is this a known issue? Any idea about it? > > Thanks > -- > > Cordialement, > > *Frank Soyer * > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mlipchuk at redhat.com Wed Feb 14 10:04:10 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Wed, 14 Feb 2018 12:04:10 +0200 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: <73f0-5a840000-231-3a296d00@233471441> References: <73f0-5a840000-231-3a296d00@233471441> Message-ID: Hi Frank, I already replied to your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts: ginger.local.systea.fr and victor.local.systea.fr? Thanks, Maor On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: > Hi all, > I discovered yesterday a problem when migrating VMs with more than one > vdisk. > On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 > VMs needed for a test, from a template with a 20G vdisk. On these VMs I > added a 100G vdisk (for these tests I didn't want to waste time extending > the existing vdisks... But I lost time in the end...). The VMs with the 2 > vdisks work well. > Now I saw some updates waiting on the host. I tried to put it in > maintenance... But it got stuck on the two VMs. They were marked "migrating" > but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated > without problem at the same time.
> I saw that a kvm process for the (big) VMs was launched on the source AND > destination host, but after tens of minutes the migration and the VMs were > still frozen. I tried to cancel the migration for the VMs: it failed. The > only way to stop it was to power off the VMs: the kvm process died on the 2 > hosts and the GUI reported a failed migration. > Just in case, I tried to delete the second vdisk on one of these VMs: it > then migrated without error! And no access problem. > I tried to extend the first vdisk of the second VM, then delete the second > vdisk: it now migrates without problem! > > So after another test with a VM with 2 vdisks, I can say that this is what blocks > the migration process :( > > In engine.log, for a VM with 1 vdisk that migrates well, we see: > > 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to > object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', > sharedLocks=''}' > 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > Running command: MigrateVmToServerCommand internal: false.
Entities > affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group > MIGRATE_VM with role type USER > 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', > dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' > 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 14f61ee0 > 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName > = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', > dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' > 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 775cd381 > 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, > log id: 775cd381 > 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] > FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 > 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) > [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), > Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: > 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, > Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, > User: admin at internal-authz). > 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) > [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, > FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 > 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) > [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, > guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, > QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, > timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, > guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, > custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId=' > 879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='unix', type='CHANNEL', bootOrder='0', specParams='[]', > address='{bus=0, controller=0, type=virtio-serial, port=1}', > managed='false', plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 
879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, > kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, > devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, > clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 
54b4b435 > 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) > [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' > 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) > [54a65b66] Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) > [54a65b66] Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly > detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( > ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a- > f17d7cd87bb1') > 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS > 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring > it in the refresh until migration is done > .... > 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) > 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] > START, DestroyVDSCommand(HostName = victor.local.systea.fr, > DestroyVmVDSCommandParameters:{runAsync='true', > hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', > secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log > id: 560eca57 > 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] > FINISH, DestroyVDSCommand, log id: 560eca57 > 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from > 'MigratingFrom' --> 'Down' > 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing > over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host > 'd569c2dd-8f30-4878-8aea-858db285cf69'. Setting VM to status 'MigratingTo' > 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM > '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from > 'MigratingTo' --> 'Up' > 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] > START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, > MigrateStatusVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 > 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] > FINISH, MigrateStatusVDSCommand, log id: 7a25c281 > 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] > EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, > Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom > ID: null, Custom Event ID: -1, Message: Migration completed (VM: > Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: > ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual > downtime: (N/A)) > 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (ForkJoinPool-1-worker-4) [] Lock freed to object > 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', > sharedLocks=''}' > 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] > START, FullListVDSCommand(HostName = ginger.local.systea.fr, > FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 > 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] > FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, > tabletEnable=true, pid=18748, guestDiskMapping={}, > transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, > guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93- > 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- > 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=1}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 
879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, > maxMemSlots=16, kvmEnable=true, pitReinjection=false, > displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, > memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 > <(430)%20425-9600>, display=vnc}], log id: 7cc65298 > 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] > Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] > Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) > [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, > emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, > tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_ > HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, > QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, > timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang. 
> Object;@77951faf, custom={device_fbddd528-7d93- > 49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02- > 565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=1}', managed='false', > plugged='true', readOnly='false', deviceAlias='channel0', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- > a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6= > VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', > type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, > port=1}', managed='false', plugged='true', readOnly='false', > deviceAlias='input0', customProperties='[]', snapshotId='null', > logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6- > a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId=' > fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', > device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', > address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', > managed='false', plugged='true', readOnly='false', deviceAlias='ide', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_ > 879c93ab-4df1-435c-af02-565039fcc254device_8945f61a- > abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{ > deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', > vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', > type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, > controller=0, type=virtio-serial, port=2}', managed='false', > 
plugged='true', readOnly='false', deviceAlias='channel1', > customProperties='[]', snapshotId='null', logicalName='null', > hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, > vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, > bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, > numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, > maxMemSlots=16, kvmEnable=true, pitReinjection=false, > displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, > memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 > <(430)%20426-3620>, display=vnc}], log id: 58cdef4c > 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) > [7fcb200a] Received a vnc Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, > displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, > port=5901} > 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) > [7fcb200a] Received a lease Device without an address when processing VM > 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: > {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, > sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, > offset=6291456, device=lease, path=/rhev/data-center/mnt/ > glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, > type=lease} > > > > > For the VM with 2 vdisks we see : > > 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to > object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', > sharedLocks=''}' > 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > Running command: MigrateVmToServerCommand internal: false. 
Entities > affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group > MIGRATE_VM with role type USER > 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', > dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' > 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 3702a9e0 > 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName > = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', > hostId='d569c2dd-8f30-4878-8aea-858db285cf69', > vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', > dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' > 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', > migrationDowntime='0', autoConverge='true', migrateCompressed='false', > consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', > maxIncomingMigrations='2', maxOutgoingMigrations='2', > convergenceSchedule='[init=[{name=setDowntime, params=[100]}], > stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, > action={name=setDowntime, params=[200]}}, {limit=3, > action={name=setDowntime, params=[300]}}, {limit=4, > action={name=setDowntime, params=[400]}}, {limit=6, > action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, > params=[]}}]]'}), log id: 1840069c > 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, > log id: 1840069c > 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] > (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] > FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 > 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) > [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), > Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: > f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, > Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, > User: admin at internal-authz). > ... > 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) > [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' > 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly > detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( > victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- > 858db285cf69') > 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring > it in the refresh until migration is done > ... > 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. > vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly > detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( > victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea- > 858db285cf69') > 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core. 
> vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM > 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS > 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring > it in the refresh until migration is done > > > > and so on, with the last lines repeated indefinitely for hours until we powered off the > VM... > Is this a known issue? Any idea about it? > > Thanks > > oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1. > > -- > > Cordialement, > > *Frank Soyer * > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Wed Feb 14 10:46:26 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 14 Feb 2018 11:46:26 +0100 Subject: [ovirt-users] Slow conversion from VMware in 4.1 In-Reply-To: <20180206101929.GV2787@redhat.com> References: <20180125090849.GA8265@redhat.com> <20180125100629.GT2787@redhat.com> <20180202115221.GA2787@redhat.com> <20180205221330.GR2787@redhat.com> <20180206101929.GV2787@redhat.com> Message-ID: On Tue, Feb 6, 2018 at 11:19 AM, Richard W.M. Jones wrote: > On Tue, Feb 06, 2018 at 11:11:37AM +0100, Luca 'remix_tj' Lorenzetto wrote: >> On 6 Feb 2018 at 10:52 AM, "Yaniv Kaul" wrote: >> >> >> I assume its network interfaces are a bottleneck as well. Certainly if >> they are 1g. >> Y. >> >> >> That's not the case: vCenter uses 10g, and so do all the involved hosts. >> >> We first suspected the network, but investigation has cleared it. Network usage is under 40% with 4 ongoing migrations. > > The problem is two-fold and is common to all vCenter transformations: > > (1) A single https connection is used and each block of data that is > requested is processed serially. > > (2) vCenter has to forward each request to the ESXi hypervisor.
> > (1) + (2) => most time is spent waiting on the lengthy round trips for > each requested block of data. > > This is why overlapping multiple parallel conversions works and > (although each conversion is just as slow) improves throughput, > because you're filling in the long idle gaps by serving other > conversions. > [cut] FYI, it was a CPU utilization issue. Now that vCenter has a lower average CPU usage, migration times have halved and returned to the original estimates. Thanks, Richard, for the info about virt-v2v, we improved our knowledge of this tool :-) Luca -- "It is absurd to employ men of excellent intelligence to perform calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibnitz, Philosopher and Mathematician (1646-1716) "The Internet is the world's largest library. The problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (b. 1945) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From reznikov_aa at soskol.com Wed Feb 14 11:26:47 2018 From: reznikov_aa at soskol.com (Reznikov Alexei) Date: Wed, 14 Feb 2018 14:26:47 +0300 Subject: [ovirt-users] ovirt 4.1 unable deploy HostedEngine on next host Configuration value not found: file=/etc/.../hosted-engine.conf In-Reply-To: References: <9d1bf913-c3b3-b200-4e1b-633be93f6e23@soskol.com> <7004f464280bca707b1b0912dcf07988@soskol.com> <78fc3942-87cf-da85-d35d-e20fb958f5ff@soskol.com> Message-ID: <3fb0a7b6-5584-fd6d-4030-4344fbd55411@soskol.com> 13.02.2018 13:42, Simone Tiraboschi wrote: > > Yes, unfortunately you are absolutely right on that: there is a bug there. > As a side effect, hosted-engine --set-shared-config and hosted-engine > --get-shared-config always refresh the local copy of the hosted-engine > configuration files with the copy on the shared storage, and so you > will always end up with host_id=1 in > /etc/ovirt-hosted-engine/hosted-engine.conf, which can lead to SPM > conflicts.
> I'd suggest manually fixing the host_id parameter in > /etc/ovirt-hosted-engine/hosted-engine.conf to its original value > (double-check against the engine DB with 'sudo -u postgres psql engine -c > "SELECT vds_spm_id, vds.vds_name FROM vds"' on the engine VM) to avoid > that. > https://bugzilla.redhat.com/1543988 Simon, I'm trying to set the right values... but unfortunately I fail. [root at h3 ovirt-hosted-engine]# cat hosted-engine.conf | grep conf_ conf_volume_UUID=a20d9700-1b9a-41d8-bb4b-f2b7c168104f conf_image_UUID=b5f353f5-9357-4aad-b1a3-751d411e6278 [root at h3 ~]# hosted-engine --set-shared-config conf_image_UUID b5f353f5-9357-4aad-b1a3-751d411e6278 --type he_conf Traceback (most recent call last):   File "/usr/lib64/python2.7/runpy.py", line 162, in _run_module_as_main   .....   File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/env/config.py", line 226, in get     key KeyError: 'Configuration value not found: file=/etc/ovirt-hosted-engine/hosted-engine.conf, key=conf_volume_UUID' How can I fix this, or is there any way to edit hosted-engine.conf on the shared storage? Regards, Alex. -------------- next part -------------- An HTML attachment was scrubbed... URL: From g.lloyd at keele.ac.uk Wed Feb 14 13:27:06 2018 From: g.lloyd at keele.ac.uk (Gary Lloyd) Date: Wed, 14 Feb 2018 13:27:06 +0000 Subject: [ovirt-users] Ovirt 3.6 to 4.2 upgrade In-Reply-To: References: Message-ID: Hi Yaniv, We attempted to share the code a few years back, but I don't think it got accepted.
In vdsm.conf we have two bridged interfaces, each connected to a SAN uplink: [irs] iscsi_default_ifaces = san1,san2 And here is a diff of the file /usr/lib/python2.7/site-packages/vdsm/storage/ vs the original for vdsm-4.20.17-1 : 463,498c463,464 < < # Original Code ## < < #iscsi.addIscsiNode(self._iface, self._target, self._cred) < #timeout = config.getint("irs", "udev_settle_timeout") < #udevadm.settle(timeout) < < ### Altered Code for EqualLogic Direct LUNs for Keele University : G.Lloyd ### < < ifaceNames = config.get('irs', 'iscsi_default_ifaces').split(',') < if not ifaceNames: < iscsi.addIscsiNode(self._iface, self._target, self._cred) < else: < self.log.debug("Connecting on interfaces: {}".format(ifaceNames)) < #for ifaceName in ifaceNames: < success = False < while ifaceNames: < self.log.debug("Remaining interfaces to try: {}".format(ifaceNames)) < ifaceName = ifaceNames.pop() < try: < self.log.debug("Connecting on {}".format(ifaceName)) < iscsi.addIscsiNode(iscsi.IscsiInterface(ifaceName), self._target, self._cred) < self.log.debug("Success connecting on {}".format(ifaceName)) < success = True < except: < self.log.debug("Failure connecting on interface {}".format(ifaceName)) < if ifaceNames: < self.log.debug("More iscsi interfaces to try, continuing") < pass < elif success: < self.log.debug("Already succeded on an interface, continuing") < pass < else: < self.log.debug("Could not connect to iscsi target on any interface, raising exception") < raise < timeout = config.getint("irs", "scsi_settle_timeout") --- > iscsi.addIscsiNode(self._iface, self._target, self._cred) > timeout = config.getint("irs", "udev_settle_timeout") 501,502d466 < ### End of Custom Alterations ### < Regards *Gary Lloyd* ________________________________________________ I.T. 
Systems:Keele University
Finance & IT Directorate
Keele:Staffs:IC1 Building:ST5 5NB:UK
+44 1782 733063
________________________________________________

On 11 February 2018 at 08:38, Yaniv Kaul wrote:
>
> On Fri, Feb 9, 2018 at 4:06 PM, Gary Lloyd wrote:
>
>> Hi
>>
>> Is it possible/supported to upgrade from Ovirt 3.6 straight to Ovirt 4.2 ?
>
> No, you go through 4.0, 4.1.
>
>> Does live migration still function between the older vdsm nodes and vdsm
>> nodes with software built against Ovirt 4.2 ?
>
> Yes, keep the cluster level at 3.6.
>
>> We changed a couple of the vdsm python files to enable iscsi multipath on
>> direct luns.
>> (It's a fairly simple change to a couple of the python files).
>
> Nice!
> Can you please contribute those patches to oVirt?
> Y.
>
>> We've been running it this way since 2012 (Ovirt 3.2).
>>
>> Many Thanks
>>
>> *Gary Lloyd*
>> ________________________________________________
>> I.T. Systems:Keele University
>> Finance & IT Directorate
>> Keele:Staffs:IC1 Building:ST5 5NB:UK
>> +44 1782 733063
>> ________________________________________________
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mlipchuk at redhat.com Wed Feb 14 13:34:56 2018
From: mlipchuk at redhat.com (Maor Lipchuk)
Date: Wed, 14 Feb 2018 15:34:56 +0200
Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!!
In-Reply-To: <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it>
References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it>
Message-ID:

Seems like all the engine logs are full with the same error.
From vdsm.log.16.xz I can see an error which might explain this failure:

2018-02-12 07:51:16,161+0100 INFO (ioprocess communication (40573)) [IOProcess] Starting ioprocess (__init__:447)
2018-02-12 07:51:16,201+0100 INFO (jsonrpc/3) [vdsm.api] FINISH mergeSnapshots return=None from=::ffff:10.0.0.46,57032, flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52)
2018-02-12 07:51:16,275+0100 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573)
2018-02-12 07:51:16,276+0100 INFO (tasks/1) [storage.ThreadPool.WorkerThread] START task 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd=>, args=None) (threadPool:208)
2018-02-12 07:51:16,543+0100 INFO (tasks/1) [storage.Image] sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID= imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825 successor=8f595e80-1013-4c14-a2f5-252bce9526fd postZero=False discard=False (image:1240)
2018-02-12 07:51:16,669+0100 ERROR (tasks/1) [storage.TaskManager.Task] (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853, in mergeSnapshots
    discard)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 1251, in merge
    srcVol = vols[successor]
KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd'

Ala, maybe you know if there is any known issue with mergeSnapshots?
The use case here is VMs from oVirt 3.5 which got registered to oVirt 4.2.
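The failing statement is a plain dictionary lookup, so the traceback simply means the successor volume UUID was not among the volumes vdsm collected for that image. A minimal illustration (the UUIDs are copied from the log above, but the dict contents are made up for the example):

```python
# vols maps volume UUID -> volume object; here only the ancestor is present.
vols = {"9cdc96de-65b7-4187-8ec3-8190b78c1825": "<ancestor volume>"}
successor = "8f595e80-1013-4c14-a2f5-252bce9526fd"

# The equivalent of vdsm's `srcVol = vols[successor]` in image.merge():
try:
    src_vol = vols[successor]
except KeyError as exc:
    # This is exactly the shape of the KeyError in the archived traceback.
    print("missing successor volume: %s" % exc.args[0])
```

In other words, the error points at stale metadata (a snapshot whose successor volume no longer exists on the domain) rather than at the merge logic itself.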
Regards,
Maor

On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti <enrico.becchetti at pg.infn.it> wrote:

> Hi,
> also you can download them through these links:
>
> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>
> Thanks again !!!!
>
> Best Regards
> Enrico
>
> Il 13/02/2018 14:52, Maor Lipchuk wrote:
>
> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk wrote:
>
>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <enrico.becchetti at pg.infn.it> wrote:
>>
>>> see the attached files please ... thanks for your attention !!!
>>
>> Seems like the engine logs do not contain the entire process, can you
>> please share older logs since the import operation?
>
> And VDSM logs as well from your host
>
>>> Best Regards
>>> Enrico
>>>
>>> Il 13/02/2018 14:09, Maor Lipchuk wrote:
>>>
>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <enrico.becchetti at pg.infn.it> wrote:
>>>
>>>> Dear All,
>>>> I have been using oVirt for a long time, with three hypervisors and an
>>>> external engine running in a CentOS VM.
>>>>
>>>> These three hypervisors have HBAs and access to Fibre Channel storage.
>>>> Until recently I used version 3.5, then I reinstalled everything from
>>>> scratch and now I have 4.2.
>>>>
>>>> Before formatting everything, I detached the storage data domain (FC)
>>>> with the virtual machines and reimported it into the new 4.2, and all went
>>>> well. In this domain there were virtual machines with and without snapshots.
>>>>
>>>> Now I have two problems. The first is that if I try to delete a
>>>> snapshot the process does not end successfully and remains hanging, and the
>>>> second problem is that in one case I lost the virtual machine !!!
>>>
>>> Not sure that I fully understand the scenario.
>>> How did the virtual machine get lost if you only tried to delete a
>>> snapshot?
>>>> So I need your help to kill the three running zombie tasks, because with
>>>> taskcleaner.sh I can't do anything, and then I need to know how I can delete
>>>> the old snapshots made with the 3.5 without losing other data or without
>>>> having new processes that don't terminate correctly.
>>>>
>>>> If you want some log files please let me know.
>>>
>>> Hi Enrico,
>>>
>>> Can you please attach the engine and VDSM logs
>>>
>>>> Thank you so much.
>>>> Best Regards
>>>> Enrico
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> --
>>> _______________________________________________________________________
>>>
>>> Enrico Becchetti Servizio di Calcolo e Reti
>>>
>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>>> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
>>> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
>>> ______________________________________________________________________
>
> --
> _______________________________________________________________________
>
> Enrico Becchetti Servizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
> ______________________________________________________________________
>
> --
> _______________________________________________________________________
>
> Enrico Becchetti Servizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
> ______________________________________________________________________

-------------- next part --------------
An
HTML attachment was scrubbed...
URL:

From nicolas at ecarnot.net Wed Feb 14 14:51:43 2018
From: nicolas at ecarnot.net (Nicolas Ecarnot)
Date: Wed, 14 Feb 2018 15:51:43 +0100
Subject: [ovirt-users] [Qemu-block] qcow2 images corruption
In-Reply-To: References: <20180213094111.GB5083@localhost.localdomain>
Message-ID:

https://framadrop.org/r/Lvvr392QZo#/wOeYUUlHQAtkUw1E+x2YdqTqq21Pbic6OPBIH0TjZE=

On 14/02/2018 at 00:01, John Snow wrote:
>
> On 02/13/2018 04:41 AM, Kevin Wolf wrote:
>> On 07.02.2018 at 18:06, Nicolas Ecarnot wrote:
>>> TL;DR: qcow2 images keep getting corrupted. Any workaround?
>>
>> Not without knowing the cause.
>>
>> The first thing to make sure is that the image isn't touched by a second
>> process while QEMU is running a VM. The classic one is using 'qemu-img
>> snapshot' on the image of a running VM, which is instant corruption (and
>> newer QEMU versions have locking in place to prevent this), but we have
>> seen more absurd cases of things outside QEMU tampering with the image
>> when we were investigating previous corruption reports.
>>
>> This covers the majority of all reports; we haven't had a real
>> corruption caused by a QEMU bug in ages.
>>
>>> After having found (https://access.redhat.com/solutions/1173623) the right
>>> logical volume hosting the qcow2 image, I can run qemu-img check on it.
>>> - On 80% of my VMs, I find no errors.
>>> - On 15% of them, I find leaked-cluster errors that I can correct using
>>> "qemu-img check -r all".
>>> - On 5% of them, I find leaked-cluster errors and further fatal errors,
>>> which cannot be corrected with qemu-img.
>>> In rare cases, qemu-img can correct them but destroys large parts of the
>>> image (it becomes unusable), and in other cases it cannot correct them at all.
>>
>> It would be good if you could make the 'qemu-img check' output available
>> somewhere.
>>
>> It would be even better if we could have a look at the respective image.
>> I seem to remember that John (CCed) had a few scripts to analyse >> corrupted qcow2 images, maybe we would be able to see something there. >> > > Hi! I did write a pretty simplistic tool for trying to tell the shape of > a corruption at a glance. It seems to work pretty similarly to the other > tool you already found, but it won't hurt anything to run it: > > https://github.com/jnsnow/qcheck > > (Actually, that other tool looks like it has an awful lot of options. > I'll have to check it out.) > > It can print a really upsetting amount of data (especially for very > corrupt images), but in the default case, the simple setting should do > the trick just fine. > > You could always put the output from this tool in a pastebin too; it > might help me visualize the problem a bit more -- I find seeing the > exact offsets and locations of where all the various tables and things > to be pretty helpful. > > You can also always use the "deluge" option and compress it if you want, > just don't let it print to your terminal: > > jsnow at probe (dev) ~/s/qcheck> ./qcheck -xd > /home/bos/jsnow/src/qemu/bin/git/install_test_f26.qcow2 > deluge.log; > and ls -sh deluge.log > 4.3M deluge.log > > but it compresses down very well: > > jsnow at probe (dev) ~/s/qcheck> 7z a -t7z -m0=ppmd deluge.ppmd.7z deluge.log > jsnow at probe (dev) ~/s/qcheck> ls -s deluge.ppmd.7z > 316 deluge.ppmd.7z > > So I suppose if you want to send along: > (1) The basic output without any flags, in a pastebin > (2) The zipped deluge output, just in case > > and I will try my hand at guessing what went wrong. > > > (Also, maybe my tool will totally choke for your image, who knows. It > hasn't received an overwhelming amount of testing apart from when I go > to use it personally and inevitably wind up displeased with how it > handles certain situations, so ...) 
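The per-image triage Nicolas describes earlier in the thread (no errors / leaked clusters only / fatal errors) is easy to script across a fleet. A small sketch that buckets one image by the summary lines `qemu-img check` prints; the exact wording of those lines is an assumption based on common qemu releases, so verify it against your version before relying on it:

```python
import re

# Assumed qemu-img check summary lines (check against your qemu version), e.g.:
#   "37 leaked clusters were found on the image."
#   "5 errors were found on the image."
LEAKED = re.compile(r"(\d+) leaked clusters were found")
ERRORS = re.compile(r"(\d+) errors were found")

def classify(check_output):
    """Bucket one image the way Nicolas did: 'clean', 'leaked', or 'fatal'."""
    if ERRORS.search(check_output):
        return "fatal"    # real corruption; `qemu-img check -r all` may not help
    if LEAKED.search(check_output):
        return "leaked"   # usually repairable with `qemu-img check -r all`
    return "clean"
```

Feeding it the captured stdout of a nightly `qemu-img check` run per VM would reproduce the 80/15/5 breakdown automatically.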
> >>> What I read similar to my case is:
>>> - usage of qcow2
>>> - heavy disk I/O
>>> - using the virtio-blk driver
>>>
>>> In the proxmox thread, they tend to say that using virtio-scsi is the
>>> solution. I have asked this question to oVirt experts
>>> (https://lists.ovirt.org/pipermail/users/2018-February/086753.html), but it's
>>> not clear the driver is to blame.
>>
>> This seems very unlikely. The corruption you're seeing is in the qcow2
>> metadata, not only in the guest data. If anything, virtio-scsi exercises
>> more qcow2 code paths than virtio-blk, so any potential bug that affects
>> virtio-blk should also affect virtio-scsi, but not the other way around.
>>
>>> I agree with the answer Yaniv Kaul gave to me, saying I have to properly
>>> report the issue, so I'm longing to know which particular information I can
>>> give you now.
>>
>> To be honest, debugging corruption after the fact is pretty hard. We'd
>> need the 'qemu-img check' output and ideally the image to do anything,
>> but I can't promise that anything would come out of this.
>>
>> Best would be a reproducer, or at least some operation that you can link
>> to the appearance of the corruption. Then we could take a more targeted
>> look at the respective code.
>>
>>> As you can imagine, all this setup is in production, and for most of the
>>> VMs, I cannot "play" with them. Moreover, we launched a campaign of nightly
>>> stopping every VM, qemu-img checking them one by one, then booting.
>>> So it might take some time before I find another corrupted image.
>>> (which I'll preciously store for debug)
>>>
>>> Other information: we very rarely do snapshots, but I'm close to imagining
>>> that automated migrations of VMs could trigger similar behaviors on qcow2
>>> images.
>>
>> To my knowledge, oVirt only uses external snapshots and creates them
>> with QMP.
This should be perfectly safe, because from the perspective of
>> the qcow2 image being snapshotted, it just means that it gets no new
>> write requests.
>>
>> Migration is something more involved, and if you could relate the
>> problem to migration, that would certainly be something to look into. In
>> that case, it would be important to know more about the setup, e.g. is
>> it migration with shared or non-shared storage?
>>
>>> Last point about the versions we use: yes that's old, yes we're planning to
>>> upgrade, but we don't know when.
>>
>> That would be helpful, too. Nothing is more frustrating than debugging a
>> bug in an old version only to find that it's already fixed in the
>> current version (well, except maybe debugging and finding nothing).
>>
>> Kevin
>
> And, looking at your other email:
>
> "- In the case of oVirt, we are here allowing tens of hosts to connect
> to the same LUN. This LUN is then managed by a classical LVM setup, but
> I see here no notion of concurrent access management. To date, I still
> haven't understood how was managed these concurrent access to the same
> LUN with no crash."
>
> I'm hoping someone else on the list can chime in on whether this is safe or not --
> I'm not really familiar with how oVirt does things, but as long as the
> rest of the stack is sound and nothing else is touching the qcow2 data
> area, we should be OK, I'd hope.
>
> (Though the last big qcow2 corruption I had to debug wound up being in
> the storage stack and not in QEMU, so I have some prejudices here)
>
> Anyway, I'll try to help as best as I'm able, but no promises.
>
> --js

--
Nicolas ECARNOT

From enrico.becchetti at pg.infn.it Wed Feb 14 14:53:49 2018
From: enrico.becchetti at pg.infn.it (Enrico Becchetti)
Date: Wed, 14 Feb 2018 15:53:49 +0100
Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!!
In-Reply-To: References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it>
Message-ID: <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it>

Dear All,
old snapshots seem to be the problem. In fact, the domain DATA_FC running in 3.5 had some LVM snapshot volumes. Before deactivating DATA_FC I didn't remove these snapshots, so when I attached this volume to the new oVirt 4.2 and imported all the VMs at the same time, I also imported all the snapshots. But now how can I remove them? Through the oVirt web interface the remove tasks are still hanging. Are there any other methods?
Thanks for following this case.
Best Regards
Enrico

Il 14/02/2018 14:34, Maor Lipchuk wrote:
> Seems like all the engine logs are full with the same error.
> From vdsm.log.16.xz I can see an error which might explain this failure:
>
> 2018-02-12 07:51:16,161+0100 INFO (ioprocess communication (40573))
> [IOProcess] Starting ioprocess (__init__:447)
> 2018-02-12 07:51:16,201+0100 INFO (jsonrpc/3) [vdsm.api] FINISH
> mergeSnapshots return=None from=::ffff:10.0.0.46,57032,
> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568,
> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52)
> 2018-02-12 07:51:16,275+0100 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer]
> RPC call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573)
> 2018-02-12 07:51:16,276+0100 INFO (tasks/1)
> [storage.ThreadPool.WorkerThread] START task
> 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd= >, args=None)
> (threadPool:208)
> 2018-02-12 07:51:16,543+0100 INFO (tasks/1) [storage.Image]
> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID=
> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5
> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825
> successor=8f595e80-1013-4c14-a2f5-252bce9526fd postZero=False
> discard=False (image:1240)
> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1)
> [storage.TaskManager.Task]
> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
>     return fn(*args, **kargs)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
>     return self.cmd(*self.argslist, **self.argsdict)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
>     return method(self, *args, **kwargs)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853, in mergeSnapshots
>     discard)
> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line 1251, in merge
>     srcVol = vols[successor]
> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd'
>
> Ala, maybe you know if there is any known issue with mergeSnapshots?
> The use case here is VMs from oVirt 3.5 which got registered to oVirt 4.2.
>
> Regards,
> Maor
>
> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti wrote:
>
> Hi,
> also you can download them through these links:
>
> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>
> Thanks again !!!!
>
> Best Regards
> Enrico
>
>> Il 13/02/2018 14:52, Maor Lipchuk wrote:
>>>
>>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk wrote:
>>>
>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti wrote:
>>>
>>> see the attached files please ... thanks for your attention !!!
>>>
>>> Seems like the engine logs do not contain the entire
>>> process, can you please share older logs since the import
>>> operation?
>>>
>>> And VDSM logs as well from your host
>>>
>>> Best Regards
>>> Enrico
>>>
>>> Il 13/02/2018 14:09, Maor Lipchuk wrote:
>>>>
>>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti wrote:
>>>>
>>>> Dear All,
>>>> I have been using oVirt for a long time with three
>>>> hypervisors and an external engine running in a
>>>> CentOS VM.
>>>>
>>>> These three hypervisors have HBAs and access to
>>>> Fibre Channel storage. Until recently I used
>>>> version 3.5, then I reinstalled everything from
>>>> scratch and now I have 4.2.
>>>>
>>>> Before formatting everything, I detached the storage
>>>> data domain (FC) with the virtual machines and
>>>> reimported it into the new 4.2 and all went well. In
>>>> this domain there were virtual machines with and
>>>> without snapshots.
>>>>
>>>> Now I have two problems. The first is that if I try
>>>> to delete a snapshot the process does not end
>>>> successfully and remains hanging, and the second
>>>> problem is that in one case I lost the virtual machine !!!
>>>>
>>>> Not sure that I fully understand the scenario.
>>>> How did the virtual machine get lost if you only tried
>>>> to delete a snapshot?
>>>>
>>>> So I need your help to kill the three running
>>>> zombie tasks, because with taskcleaner.sh I can't do
>>>> anything, and then I need to know how I can delete
>>>> the old snapshots made with the 3.5 without losing
>>>> other data or without having new processes that
>>>> don't terminate correctly.
>>>>
>>>> If you want some log files please let me know.
>>>>
>>>> Hi Enrico,
>>>>
>>>> Can you please attach the engine and VDSM logs
>>>>
>>>> Thank you so much.
>>>> Best Regards
>>>> Enrico
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>> --
>>> _______________________________________________________________________
>>>
>>> Enrico Becchetti Servizio di Calcolo e Reti
>>>
>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>>> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
>>> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
>>> ______________________________________________________________________
>>
>> --
>> _______________________________________________________________________
>>
>> Enrico Becchetti Servizio di Calcolo e Reti
>>
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
>> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
>> ______________________________________________________________________
>
> --
> _______________________________________________________________________
>
> Enrico Becchetti Servizio di Calcolo e Reti
>
> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
> Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
> Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
> ______________________________________________________________________

--
_______________________________________________________________________

Enrico Becchetti Servizio di Calcolo e Reti

Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
Via Pascoli, c/o Dipartimento di Fisica 06123 Perugia (ITALY)
Phone: +39 075 5852777 Mail: Enrico.Becchetti at pg.infn.it
______________________________________________________________________

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 2269 bytes
Desc: S/MIME cryptographic signature
URL:

From andreil1 at starlett.lv Wed Feb 14 15:26:47 2018
From: andreil1 at starlett.lv (Andrei V)
Date: Wed, 14 Feb 2018 17:26:47 +0200
Subject: [ovirt-users] Q: Upgrade 4.2 -> 4.2.1 Dependency Problem
Message-ID: <77FC6D61-31BE-4E50-A92E-FC3CBD931A67@starlett.lv>

Hi !

I ran into an unexpected problem upgrading an oVirt node (installed manually on CentOS).
This problem has to be fixed manually, otherwise the upgrade command from the host engine also fails.

-> glusterfs-rdma = 3.12.5-2.el7
was installed manually as a dependency resolution for ovirt-host-4.2.1-1.el7.centos.x86_64

Q: How to get around this problem? Thanks in advance.

Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
Requires: glusterfs-rdma
Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 (@ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.5-2.el7
Obsoleted By: mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
Not found
Available: glusterfs-rdma-3.8.4-18.4.el7.centos.x86_64 (base)
glusterfs-rdma = 3.8.4-18.4.el7.centos
Available: glusterfs-rdma-3.12.0-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.0-1.el7
Available: glusterfs-rdma-3.12.1-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-1.el7
Available: glusterfs-rdma-3.12.1-2.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.1-2.el7
Available: glusterfs-rdma-3.12.3-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.3-1.el7
Available: glusterfs-rdma-3.12.4-1.el7.x86_64 (ovirt-4.2-centos-gluster312)
glusterfs-rdma = 3.12.4-1.el7

From lorenzetto.luca at gmail.com Wed Feb 14 15:31:59 2018
From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto)
Date: Wed, 14 Feb 2018 16:31:59 +0100
Subject: [ovirt-users] Q: Upgrade 4.2 -> 4.2.1 Dependency Problem
In-Reply-To: <77FC6D61-31BE-4E50-A92E-FC3CBD931A67@starlett.lv>
References:
<77FC6D61-31BE-4E50-A92E-FC3CBD931A67@starlett.lv>
Message-ID:

On Wed, Feb 14, 2018 at 4:26 PM, Andrei V wrote:
> Hi !
>
> I ran into an unexpected problem upgrading an oVirt node (installed manually on CentOS).
> This problem has to be fixed manually, otherwise the upgrade command from the host engine also fails.
>
> -> glusterfs-rdma = 3.12.5-2.el7
> was installed manually as a dependency resolution for ovirt-host-4.2.1-1.el7.centos.x86_64
>
> Q: How to get around this problem? Thanks in advance.
>
> Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
> Requires: glusterfs-rdma
> Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 (@ovirt-4.2-centos-gluster312)
> glusterfs-rdma = 3.12.5-2.el7
> Obsoleted By: mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
> Not found
[cut]

Try with yum clean all and then upgrade. And disable HP-spp if you don't need it now.

Luca

--
"It is absurd to employ men of excellent intelligence to do calculations that could be entrusted to anyone if machines were used" Gottfried Wilhelm von Leibniz, Philosopher and Mathematician (1646-1716)

"The Internet is the largest library in the world. But the problem is that the books are all scattered on the floor" John Allen Paulos, Mathematician (b. 1945)

Luca 'remix_tj' Lorenzetto, http://www.remixtj.net ,

From andreil1 at starlett.lv Wed Feb 14 15:39:05 2018
From: andreil1 at starlett.lv (andreil1 at starlett.lv)
Date: Wed, 14 Feb 2018 17:39:05 +0200
Subject: [ovirt-users] Q: Upgrade 4.2 -> 4.2.1 Dependency Problem
In-Reply-To: References: <77FC6D61-31BE-4E50-A92E-FC3CBD931A67@starlett.lv>
Message-ID: <066A38C1-24DD-4FB7-8F68-974DEFBF807C@starlett.lv>

> On 14 Feb 2018, at 17:31, Luca 'remix_tj' Lorenzetto wrote:
>
> On Wed, Feb 14, 2018 at 4:26 PM, Andrei V wrote:
>> Hi !
>>
>> I ran into an unexpected problem upgrading an oVirt node (installed manually on CentOS).
>> This problem has to be fixed manually, otherwise the upgrade command from the host engine also fails.
>>
>> -> glusterfs-rdma = 3.12.5-2.el7
>> was installed manually as a dependency resolution for ovirt-host-4.2.1-1.el7.centos.x86_64
>>
>> Q: How to get around this problem? Thanks in advance.
>>
>> Error: Package: ovirt-host-4.2.1-1.el7.centos.x86_64 (ovirt-4.2)
>> Requires: glusterfs-rdma
>> Removing: glusterfs-rdma-3.12.5-2.el7.x86_64 (@ovirt-4.2-centos-gluster312)
>> glusterfs-rdma = 3.12.5-2.el7
>> Obsoleted By: mlnx-ofa_kernel-3.4-OFED.3.4.2.1.5.1.ged26eb5.1.rhel7u3.x86_64 (HP-spp)
>> Not found
> [cut]
>
> Try with yum clean all and then upgrade. And disable HP-spp if you don't need it now.
>
> Luca

I did "yum clean all" before posting on the oVirt list; same problem.
However, disabling the HP-spp repository did the trick, thanks a lot !

From edo7411 at gmail.com Wed Feb 14 17:38:32 2018
From: edo7411 at gmail.com (Edoardo Mazza)
Date: Wed, 14 Feb 2018 18:38:32 +0100
Subject: [ovirt-users] ovirt 4.2 gluster configuration
Message-ID:

Hi all,
Scenario: 3 nodes, each with 3 interfaces: 1 for management, 1 for gluster, 1 for VMs.
The management interface has its own name and its own IP (e.g. name = ov1, ip = 192.168.1.1/24); the same goes for the gluster interface, which has its own name and its own IP (e.g. name = gluster1, ip = 192.168.2.1/24).
When configuring bricks from the oVirt management tools I get the error: "no uuid for the name ov1".
The network for gluster communication has been defined on network/interface gluster1.
What's wrong with this configuration?
Thanks in advance.
Edoardo

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Alex at unix1337.com Wed Feb 14 19:20:01 2018
From: Alex at unix1337.com (Alex Bartonek)
Date: Wed, 14 Feb 2018 14:20:01 -0500
Subject: [ovirt-users] Unable to connect to the graphic server
In-Reply-To: References: Message-ID:

-------- Original Message --------
On February 14, 2018 2:23 AM, Yedidyah Bar David wrote:

> On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote:
>> I've built and rebuilt about 4 oVirt servers. Consider myself pretty good at this. LOL.
>> So I am setting up an oVirt server for a friend on his r710. CentOS 7, oVirt 4.2. /etc/hosts has the correct IP and FQDN set up.
>> When I build a VM and try to open a console session via SPICE I am unable to connect to the graphic server. I'm connecting from a Windows 10 box. Using virt-manager to connect.
>
> What happens when you try?

Unable to connect to the graphic console is what the error says. Here is the .vv file other than the cert stuff in it:

[virt-viewer]
type=spice
host=192.168.1.83
port=-1
password=
# Password is valid for 120 seconds.
delete-this-file=1
fullscreen=0
title=Win_7_32bit:%d
toggle-fullscreen=shift+f11
release-cursor=shift+f12
tls-port=5900
enable-smartcard=0
enable-usb-autoshare=1
usb-filter=-1,-1,-1,-1,0
tls-ciphers=DEFAULT
host-subject=O=williams.com,CN=randb.williams.com

Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue... no go.

From zend0 at ya.ru Wed Feb 14 21:12:57 2018
From: zend0 at ya.ru (Dmitry Semenov)
Date: Thu, 15 Feb 2018 00:12:57 +0300
Subject: [ovirt-users] Virtual networks in oVirt 4.2 and MTU 1500
Message-ID: <671401518642777@web22o.yandex.ru>

I have a small cluster on oVirt 4.2. Each node has a bond, which in turn carries several VLANs.
I use OVN virtual networks (External Provider -> ovirt-provider-ovn).
While testing I have noticed that in the virtual network the MTU must be less than 1500, so my question is: can I change something in the network or in the bond so that everything in the virtual network works correctly with an MTU of 1500?

Below is a link with my settings:
https://pastebin.com/F7ssCVFa

--
Best regards

From jlawrence at squaretrade.com Thu Feb 15 00:08:43 2018
From: jlawrence at squaretrade.com (Jamie Lawrence)
Date: Wed, 14 Feb 2018 16:08:43 -0800
Subject: [ovirt-users] hosted engine install fails on useless DHCP lookup
In-Reply-To: References: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com>
Message-ID: <4A78D9CB-3B48-40F6-AA3C-FC94BE1AA1F1@squaretrade.com>
While testing I have noticed that in virtual network MTU must be less 1500, so my question is may I change something in network or in bond in order everything in virtual network works correctly with MTU 1500? Below link with my settings: https://pastebin.com/F7ssCVFa --? Best regards From jlawrence at squaretrade.com Thu Feb 15 00:08:43 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Wed, 14 Feb 2018 16:08:43 -0800 Subject: [ovirt-users] hosted engine install fails on useless DHCP lookup In-Reply-To: References: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com> Message-ID: <4A78D9CB-3B48-40F6-AA3C-FC94BE1AA1F1@squaretrade.com> > On Feb 14, 2018, at 1:27 AM, Simone Tiraboschi wrote: > On Wed, Feb 14, 2018 at 2:11 AM, Jamie Lawrence wrote: > Hello, > > I'm seeing the hosted engine install fail on an Ansible playbook step. Log below. I tried looking at the file specified for retry, below (/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); it contains the word, 'localhost'. > > The log below didn't contain anything I could see that was actionable; given that it was an ansible error, I hunted down the config and enabled logging. On this run the error was different - the installer log was the same, but the reported error (from the installer changed). > > The first time, the installer said: > > [ INFO ] TASK [Wait for the host to become non operational] > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": {"ovirt_hosts": []}, "attempts": 150, "changed": false} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook > [ INFO ] Stage: Clean up > > 'localhost' here is not an issue by itself: the playbook is executed on the host against the same host over a local connection so localhost is absolutely fine there. > > Maybe you hit this one: > https://bugzilla.redhat.com/show_bug.cgi?id=1540451 That seems likely. > It seams NetworkManager related but still not that clear. 
> Stopping NetworkManager and starting network before the deployment seams to help. Tried this, got the same results. [snip] > Anyone see what is wrong here? > > This is absolutely fine. > The new ansible based flow (also called node zero) uses an engine running on a local virtual machine to bootstrap the system. > The bootstrap local VM runs over libvirt default natted network with its own dhcp instance, that's why we are consuming it. > The locally running engine will create a target virtual machine on the shared storage and that one will be instead configured as you specified. Thanks for the context - that's useful, and presumably explains why 192.168 addresses (which we don't use) are appearing in the logs. Not being entirely sure where to go from here, I guess I'll spend the evening figuring out ansible-ese in order to try to figure out why it is blowing chunks. Thanks for the note. -j From didi at redhat.com Thu Feb 15 06:52:03 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Thu, 15 Feb 2018 08:52:03 +0200 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek wrote: > > -------- Original Message -------- > On February 14, 2018 2:23 AM, Yedidyah Bar David wrote: > >>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote: >>>I've built and rebuilt about 4 oVirt servers. Consider myself pretty good >>> at this. LOL. >>> So I am setting up a oVirt server for a friend on his r710. CentOS 7, ovirt >>> 4.2. /etc/hosts has the correct IP and FQDN setup. >>>When I build a VM and try to open a console session via SPICE I am unable >>> to connect to the graphic server. I'm connecting from a Windows 10 box. >>> Using virt-manager to connect. >>> >> What happens when you try? >> > > Unable to connect to the graphic console is what the error says. 
Here is the .vv file other than the cert stuff in it: > > [virt-viewer] > type=spice > host=192.168.1.83 > port=-1 > password= > # Password is valid for 120 seconds. > delete-this-file=1 > fullscreen=0 > title=Win_7_32bit:%d > toggle-fullscreen=shift+f11 > release-cursor=shift+f12 > tls-port=5900 > enable-smartcard=0 > enable-usb-autoshare=1 > usb-filter=-1,-1,-1,-1,0 > tls-ciphers=DEFAULT > host-subject=O=williams.com,CN=randb.williams.com > > > > Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue..no go. Did you verify that you can connect there manually (e.g. with telnet)? Can you run a sniffer on both sides to make sure traffic passes correctly? Can you check vdsm/libvirt logs on the host side? Thanks, -- Didi From rulasmur at gmail.com Thu Feb 15 07:40:36 2018 From: rulasmur at gmail.com (Rulas Mur) Date: Thu, 15 Feb 2018 09:40:36 +0200 Subject: [ovirt-users] Moving Combined Engine & Node to new network. Message-ID: Hi, I setup the host+engine on centos 7 on my home network and everything worked perfectly, However when I connected it to my work network networking failed completely. hostname -I would be blank. lspci does list the hardware nmcli d is empty nmcli con show is empty nmcli device status is empty there is a device in /sys/class/net/ Is there a way to fix this? or do I have to reinstall? On another note, ovirt is amazing! Thanks for the quality product, Rulasmur -------------- next part -------------- An HTML attachment was scrubbed... 
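
Back in the SPICE console thread: the .vv file posted above already tells the client exactly where it will connect, and `port=-1` combined with a set `tls-port` means the session is TLS-only, so a plain-text telnet test can succeed while the console still fails. A small sketch (sample file contents copied from the post, path made up) that pulls out the port the client actually dials:

```shell
# Sample .vv contents (trimmed copy of the one posted above).
cat > /tmp/console.vv <<'EOF'
[virt-viewer]
type=spice
host=192.168.1.83
port=-1
tls-port=5900
EOF

# The port the client will actually dial is the TLS one:
awk -F= '/^tls-port=/ { print $2 }' /tmp/console.vv

# From the client side, a TLS probe is more telling than telnet, e.g.:
#   openssl s_client -connect 192.168.1.83:5900 </dev/null
# A completed handshake also lets you compare the presented certificate
# against the host-subject line in the .vv file.
```

If the handshake itself fails or the certificate subject does not match `host-subject`, that points at the host's PKI rather than firewalling.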
URL: From stirabos at redhat.com Thu Feb 15 08:38:01 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 15 Feb 2018 09:38:01 +0100 Subject: [ovirt-users] hosted engine install fails on useless DHCP lookup In-Reply-To: <4A78D9CB-3B48-40F6-AA3C-FC94BE1AA1F1@squaretrade.com> References: <71A2C531-BFC4-41AD-B7CD-CA41D5AA4D29@squaretrade.com> <4A78D9CB-3B48-40F6-AA3C-FC94BE1AA1F1@squaretrade.com> Message-ID: On Thu, Feb 15, 2018 at 1:08 AM, Jamie Lawrence wrote: > > On Feb 14, 2018, at 1:27 AM, Simone Tiraboschi > wrote: > > On Wed, Feb 14, 2018 at 2:11 AM, Jamie Lawrence < > jlawrence at squaretrade.com> wrote: > > Hello, > > > > I'm seeing the hosted engine install fail on an Ansible playbook step. > Log below. I tried looking at the file specified for retry, below > (/usr/share/ovirt-hosted-engine-setup/ansible/bootstrap_local_vm.retry); > it contains the word, 'localhost'. > > > > The log below didn't contain anything I could see that was actionable; > given that it was an ansible error, I hunted down the config and enabled > logging. On this run the error was different - the installer log was the > same, but the reported error (from the installer changed). > > > > The first time, the installer said: > > > > [ INFO ] TASK [Wait for the host to become non operational] > > [ ERROR ] fatal: [localhost]: FAILED! => {"ansible_facts": > {"ovirt_hosts": []}, "attempts": 150, "changed": false} > > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > > [ INFO ] Stage: Clean up > > > > 'localhost' here is not an issue by itself: the playbook is executed on > the host against the same host over a local connection so localhost is > absolutely fine there. > > > > Maybe you hit this one: > > https://bugzilla.redhat.com/show_bug.cgi?id=1540451 > > That seems likely. > At the point the engine VM is up but you can reach it only from that host since it's on a natted network. 
I'd suggest to connect to the engine VM from there and check host-deploy logs. > > > > It seams NetworkManager related but still not that clear. > > Stopping NetworkManager and starting network before the deployment seams > to help. > > Tried this, got the same results. > > [snip] > > Anyone see what is wrong here? > > > > This is absolutely fine. > > The new ansible based flow (also called node zero) uses an engine > running on a local virtual machine to bootstrap the system. > > The bootstrap local VM runs over libvirt default natted network with its > own dhcp instance, that's why we are consuming it. > > The locally running engine will create a target virtual machine on the > shared storage and that one will be instead configured as you specified. > > Thanks for the context - that's useful, and presumably explains why > 192.168 addresses (which we don't use) are appearing in the logs. > > Not being entirely sure where to go from here, I guess I'll spend the > evening figuring out ansible-ese in order to try to figure out why it is > blowing chunks. > > Thanks for the note. > > -j -------------- next part -------------- An HTML attachment was scrubbed... URL: From punaatua.pk at gmail.com Thu Feb 15 09:19:02 2018 From: punaatua.pk at gmail.com (Punaatua PAINT-KOUI) Date: Wed, 14 Feb 2018 23:19:02 -1000 Subject: [ovirt-users] VDSM SSL validity Message-ID: Hi, I setup an hyperconverged solution with 3 nodes, hosted engine on glusterfs. We run this setup in a PCI-DSS environment. According to PCI-DSS requirements, we are required to reduce the validity of any certificate under 39 months. I saw in this link https://www.ovirt.org/develop/release-management/features/infra/pki/ that i can use the option VdsCertificateValidityInYears at engine-config. I'm running ovirt engine 4.2.1 and i checked when i was on 4.2 how to edit the option with engine-config --all and engine-config --list but the option is not listed Am i missing something ? 
I think I can regenerate a VDSM certificate with openssl and the CA conf in /etc/pki/ovirt-engine on the hosted-engine, but I would rather modify the option for future hosts that I will add.

--
-------------------------------------
PAINT-KOUI Punaatua
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From msteele at telvue.com Thu Feb 15 10:36:13 2018
From: msteele at telvue.com (Mark Steele)
Date: Thu, 15 Feb 2018 05:36:13 -0500
Subject: [ovirt-users] ERROR - some other host already uses IP ###.###.###.###
Message-ID:

Good morning,

We had a storage crash early this morning that messed up a couple of our ovirt hosts. Networking seemed to be the biggest issue. I have decided to remove the bridge information in /etc/sysconfig/network-scripts and IP the NICs in order to re-import them into my ovirt installation (I have already removed the hosts).

One of the NICs refuses to come up and is generating the following error:

ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other host (0C:C4:7A:5B:11:5C) already uses address ###.###.###.###.

When I ARP on this server, I do not see that MAC address - and none of my other hosts are using it either. I'm not sure where to go next other than completely reinstalling CentOS on this server and starting over.

Ovirt version is oVirt Engine Version: 3.5.0.1-1.el6
OS version is CentOS Linux release 7.4.1708 (Core)

Thank you

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
-------------- next part --------------
An HTML attachment was scrubbed...
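
On the VDSM certificate-validity question above: whatever engine-config ends up exposing, plain openssl can both mint and audit a certificate lifetime, which is enough for a PCI-DSS evidence check. A sketch using a throwaway self-signed cert (39 months is roughly 1185 days; the CN and /tmp paths are made up — on a real host the VDSM cert typically lives under /etc/pki/vdsm/certs/):

```shell
# Throwaway self-signed cert valid ~39 months (1185 days); names are made up.
openssl req -x509 -nodes -newkey rsa:2048 -days 1185 \
  -subj "/CN=vdsm.example.com" \
  -keyout /tmp/demo.key -out /tmp/demo.cer 2>/dev/null

# When does it expire?
openssl x509 -in /tmp/demo.cer -noout -enddate

# -checkend exits 0 if the cert is still valid N seconds from now;
# here we confirm it is still valid 38 months out, i.e. inside the
# 39-month cap PCI-DSS asks for.
openssl x509 -in /tmp/demo.cer -noout -checkend $((86400 * 30 * 38)) \
  && echo "within the 39-month window"
```

Running the `-enddate`/`-checkend` pair against the existing host certificates is a quick way to inventory which ones already violate the 39-month limit before deciding whether to regenerate them by hand.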
URL: From msteele at telvue.com Thu Feb 15 11:06:04 2018
From: msteele at telvue.com (Mark Steele)
Date: Thu, 15 Feb 2018 06:06:04 -0500
Subject: [ovirt-users] Unable to put Host into Maintenance mode
Message-ID:

I have a host that is currently reporting down with NO VM's on it or associated with it. However when I attempt to put it into maintenance mode, I get the following error:

Host hv-01 cannot change into maintenance mode - not all Vms have been migrated successfully. Consider manual intervention: stopping/migrating Vms: (User: admin)

I am running oVirt Engine Version: 3.5.0.1-1.el6

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From reznikov_aa at soskol.com Thu Feb 15 11:33:55 2018
From: reznikov_aa at soskol.com (Reznikov Alexei)
Date: Thu, 15 Feb 2018 14:33:55 +0300
Subject: [ovirt-users] Unable to put Host into Maintenance mode
In-Reply-To: References:
Message-ID: <72ac36bb-fccd-253d-c08a-4a77786a27c0@soskol.com>

On 15.02.2018 14:06, Mark Steele wrote:
> Consider manual intervention

vdsClient -s 0 list table

on your host and then

vdsClient -s 0 destroy vmID

Regards,
Alex. From alkaplan at redhat.com Thu Feb 15 12:01:00 2018
From: alkaplan at redhat.com (Alona Kaplan)
Date: Thu, 15 Feb 2018 14:01:00 +0200
Subject: [ovirt-users] Manageiq ovn
Message-ID:

On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka <
aliaksei.nazarenka at gmail.com> wrote:

> Error - 1 Minute Ago
> undefined method `orchestration_stacks' for # :InfraManager:0x00000007bf9288> - I get this message if I try to create an
> ovirt network and then try to check the status of the network manager.
>

It is the same bug.
You need to apply the fixes in https://github.com/ManageIQ/manageiq-providers-ovirt/pull/198/files to make it work. The best option is to upgrade your version. > 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < > aliaksei.nazarenka at gmail.com>: > >> I tried to make changes to the file refresher_ovn_provider.yml - changed >> the passwords, corrected the names of the names, but it was not successful. >> >> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >> aliaksei.nazarenka at gmail.com>: >> >>> Hi! >>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.20180125143019_1450f27 >>> After i set this commits (upstream - https://bugzilla.redhat.com/ >>> 1542063) i no saw changes. >>> >>> >>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan : >>> >>>> Hi, >>>> >>>> What version of manageiq you are using? >>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream - >>>> https://bugzilla.redhat.com/1542063) that was fixed in version 5.9.0.20 >>>> >>>> Please let me know it upgrading the version helped you. >>>> >>>> Thanks, >>>> Alona. >>>> >>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka < >>>> aliaksei.nazarenka at gmail.com> wrote: >>>> >>>>> Good afternoon! >>>>> I read your article - https://www.ovirt.org/develop/ >>>>> release-management/features/network/manageiq_ovn/. I have only one >>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When >>>>> I try to create a network, I need to select a tenant, but there is nothing >>>>> that I could choose. How can it be? >>>>> >>>>> Sincerely. Alexey Nazarenko >>>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Thu Feb 15 13:17:59 2018 From: andreil1 at starlett.lv (Andrei V) Date: Thu, 15 Feb 2018 15:17:59 +0200 Subject: [ovirt-users] Sparsify in 4.2 - where it moved ? Message-ID: <412264FE-9AF0-4058-8153-EE22D7DF52B7@starlett.lv> Hi ! I can?t locate ?Sparsify? disk image command anywhere in oVirt 4.2. 
Where it have been moved ? Thanks Andrei From alkaplan at redhat.com Thu Feb 15 13:38:23 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Thu, 15 Feb 2018 15:38:23 +0200 Subject: [ovirt-users] Manageiq ovn In-Reply-To: References: Message-ID: Hi Alexey, Please reply to the users list so all the user may enjoy the information. Automatic sync of ovn networks to ovirt was added on version ovirt-engine-4.2.1.3 (https://bugzilla.redhat.com/1511823). If you use lower version you should import the network to ovirt manually (networks tab -> import button). Once the ovn network is imported to ovirt a vnic profile is automatically created to it. In manageiq, you can assign this profile to a vm you provision (provision vm -> network tab -> vlan field). Alona. On Thu, Feb 15, 2018 at 3:20 PM, Aliaksei Nazarenka < aliaksei.nazarenka at gmail.com> wrote: > Big Thank you! This work! But... Networks are created, but I do not see > them in the ovirt manager, but through the ovn-nbctl command, I see all the > networks. And maybe you can tell me how to assign a VM network from > Manageiq? > > 2018-02-15 15:01 GMT+03:00 Alona Kaplan : > >> >> >> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka < >> aliaksei.nazarenka at gmail.com> wrote: >> >>> Error - 1 Minute Ago >>> undefined method `orchestration_stacks' for >>> # - I get >>> this message if I try to create a network of overts and then try to check >>> the status of the network manager. >>> >> >> It is the same bug. >> You need to apply the fixes in https://github.com/ManageIQ/ma >> nageiq-providers-ovirt/pull/198/files to make it work. >> The best option is to upgrade your version. >> >> >>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < >>> aliaksei.nazarenka at gmail.com>: >>> >>>> I tried to make changes to the file refresher_ovn_provider.yml - >>>> changed the passwords, corrected the names of the names, but it was not >>>> successful. 
>>>> >>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >>>> aliaksei.nazarenka at gmail.com>: >>>> >>>>> Hi! >>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301 >>>>> 9_1450f27 >>>>> After i set this commits (upstream - https://bugzilla.redhat.com/ >>>>> 1542063) i no saw changes. >>>>> >>>>> >>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan : >>>>> >>>>>> Hi, >>>>>> >>>>>> What version of manageiq you are using? >>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream - >>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version >>>>>> 5.9.0.20 >>>>>> >>>>>> Please let me know it upgrading the version helped you. >>>>>> >>>>>> Thanks, >>>>>> Alona. >>>>>> >>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka < >>>>>> aliaksei.nazarenka at gmail.com> wrote: >>>>>> >>>>>>> Good afternoon! >>>>>>> I read your article - https://www.ovirt.org/develop/ >>>>>>> release-management/features/network/manageiq_ovn/. I have only one >>>>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When >>>>>>> I try to create a network, I need to select a tenant, but there is nothing >>>>>>> that I could choose. How can it be? >>>>>>> >>>>>>> Sincerely. Alexey Nazarenko >>>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alkaplan at redhat.com Thu Feb 15 13:40:14 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Thu, 15 Feb 2018 15:40:14 +0200 Subject: [ovirt-users] Manageiq ovn In-Reply-To: References: Message-ID: On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka < aliaksei.nazarenka at gmail.com> wrote: > when i try to create network router, i see this message: *Unable to > create Network Router "test_router": undefined method `[]' for nil:NilClass* > What ovn-provider version you're using? Can you please attach the ovn provider log ( /var/log/ovirt-provider-ovn.log)? 
> > 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka < > aliaksei.nazarenka at gmail.com>: > >> Big Thank you! This work! But... Networks are created, but I do not see >> them in the ovirt manager, but through the ovn-nbctl command, I see all the >> networks. And maybe you can tell me how to assign a VM network from >> Manageiq? >> >> 2018-02-15 15:01 GMT+03:00 Alona Kaplan : >> >>> >>> >>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka < >>> aliaksei.nazarenka at gmail.com> wrote: >>> >>>> Error - 1 Minute Ago >>>> undefined method `orchestration_stacks' for >>>> # - I get >>>> this message if I try to create a network of overts and then try to check >>>> the status of the network manager. >>>> >>> >>> It is the same bug. >>> You need to apply the fixes in https://github.com/ManageIQ/ma >>> nageiq-providers-ovirt/pull/198/files to make it work. >>> The best option is to upgrade your version. >>> >>> >>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < >>>> aliaksei.nazarenka at gmail.com>: >>>> >>>>> I tried to make changes to the file refresher_ovn_provider.yml - >>>>> changed the passwords, corrected the names of the names, but it was not >>>>> successful. >>>>> >>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >>>>> aliaksei.nazarenka at gmail.com>: >>>>> >>>>>> Hi! >>>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301 >>>>>> 9_1450f27 >>>>>> After i set this commits (upstream - https://bugzilla.redhat.com/ >>>>>> 1542063) i no saw changes. >>>>>> >>>>>> >>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan : >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> What version of manageiq you are using? >>>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream - >>>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version >>>>>>> 5.9.0.20 >>>>>>> >>>>>>> Please let me know it upgrading the version helped you. >>>>>>> >>>>>>> Thanks, >>>>>>> Alona. 
>>>>>>> >>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka < >>>>>>> aliaksei.nazarenka at gmail.com> wrote: >>>>>>> >>>>>>>> Good afternoon! >>>>>>>> I read your article - https://www.ovirt.org/develop/ >>>>>>>> release-management/features/network/manageiq_ovn/. I have only one >>>>>>>> question: how to create a network or subnet in Manageiq + ovirt 4.2.1. When >>>>>>>> I try to create a network, I need to select a tenant, but there is nothing >>>>>>>> that I could choose. How can it be? >>>>>>>> >>>>>>>> Sincerely. Alexey Nazarenko >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Thu Feb 15 13:48:55 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Thu, 15 Feb 2018 13:48:55 +0000 Subject: [ovirt-users] Console button greyed out (4.2) Message-ID: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> Hi, We upgraded one of our infrastructures to 4.2.0 recently and since then some of our machines have the "Console" button greyed-out in the Admin UI, like they were disabled. I changed their compatibility to 4.2 but with no luck, as they're still disabled. Is there a way to know why is that, and how to solve it? I'm attaching a screenshot. Thanks. -------------- next part -------------- A non-text attachment was scrubbed... Name: Captura de pantalla de 2018-02-15 13-47-13.png Type: image/png Size: 1280 bytes Desc: not available URL: From alkaplan at redhat.com Thu Feb 15 14:10:14 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Thu, 15 Feb 2018 16:10:14 +0200 Subject: [ovirt-users] Manageiq ovn In-Reply-To: References: Message-ID: On Thu, Feb 15, 2018 at 4:03 PM, Aliaksei Nazarenka < aliaksei.nazarenka at gmail.com> wrote: > and how i can change network in the created VM? > It is not possible via manageiq. Only via ovirt. 
> > Sorry for my intrusive questions))) > > 2018-02-15 16:51 GMT+03:00 Aliaksei Nazarenka < > aliaksei.nazarenka at gmail.com>: > >> ovirt-provider-ovn-1.2.7-0.20180213232754.gitebd60ad.el7.centos.noarch >> on hosted-engine >> ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch on ovirt hosts >> >> 2018-02-15 16:40 GMT+03:00 Alona Kaplan : >> >>> >>> >>> On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka < >>> aliaksei.nazarenka at gmail.com> wrote: >>> >>>> when i try to create network router, i see this message: *Unable to >>>> create Network Router "test_router": undefined method `[]' for nil:NilClass* >>>> >>> >>> What ovn-provider version you're using? Can you please attach the ovn >>> provider log ( /var/log/ovirt-provider-ovn.log)? >>> >>> >>>> >>>> 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka < >>>> aliaksei.nazarenka at gmail.com>: >>>> >>>>> Big Thank you! This work! But... Networks are created, but I do not >>>>> see them in the ovirt manager, but through the ovn-nbctl command, I see all >>>>> the networks. And maybe you can tell me how to assign a VM network from >>>>> Manageiq? >>>>> >>>>> 2018-02-15 15:01 GMT+03:00 Alona Kaplan : >>>>> >>>>>> >>>>>> >>>>>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei Nazarenka < >>>>>> aliaksei.nazarenka at gmail.com> wrote: >>>>>> >>>>>>> Error - 1 Minute Ago >>>>>>> undefined method `orchestration_stacks' for >>>>>>> # - I >>>>>>> get this message if I try to create a network of overts and then try to >>>>>>> check the status of the network manager. >>>>>>> >>>>>> >>>>>> It is the same bug. >>>>>> You need to apply the fixes in https://github.com/ManageIQ/ma >>>>>> nageiq-providers-ovirt/pull/198/files to make it work. >>>>>> The best option is to upgrade your version. 
>>>>>> >>>>>> >>>>>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < >>>>>>> aliaksei.nazarenka at gmail.com>: >>>>>>> >>>>>>>> I tried to make changes to the file refresher_ovn_provider.yml - >>>>>>>> changed the passwords, corrected the names of the names, but it was not >>>>>>>> successful. >>>>>>>> >>>>>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >>>>>>>> aliaksei.nazarenka at gmail.com>: >>>>>>>> >>>>>>>>> Hi! >>>>>>>>> I'm use oVirt 4.2.2 + Manageiq gaprindashvili-1.2018012514301 >>>>>>>>> 9_1450f27 >>>>>>>>> After i set this commits (upstream - https://bugzilla.redhat.com/ >>>>>>>>> 1542063) i no saw changes. >>>>>>>>> >>>>>>>>> >>>>>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan : >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> What version of manageiq you are using? >>>>>>>>>> We had a bug https://bugzilla.redhat.com/1542152 (upstream - >>>>>>>>>> https://bugzilla.redhat.com/1542063) that was fixed in version >>>>>>>>>> 5.9.0.20 >>>>>>>>>> >>>>>>>>>> Please let me know it upgrading the version helped you. >>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Alona. >>>>>>>>>> >>>>>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei Nazarenka < >>>>>>>>>> aliaksei.nazarenka at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Good afternoon! >>>>>>>>>>> I read your article - https://www.ovirt.org/develop/ >>>>>>>>>>> release-management/features/network/manageiq_ovn/. I have only >>>>>>>>>>> one question: how to create a network or subnet in Manageiq + ovirt 4.2.1. >>>>>>>>>>> When I try to create a network, I need to select a tenant, but there is >>>>>>>>>>> nothing that I could choose. How can it be? >>>>>>>>>>> >>>>>>>>>>> Sincerely. Alexey Nazarenko >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jtt77777 at gmail.com Thu Feb 15 14:58:26 2018 From: jtt77777 at gmail.com (John Taylor) Date: Thu, 15 Feb 2018 09:58:26 -0500 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> Message-ID: Hi Nicolas, I had the same problem and it looked like it was because of some older vms (I believe from 3.6) that were configured with console with video type of CIRRUS and protocol VNC. Tracing it out it showed that the vm libvirt xml was begin set to headless. I tried different settings but the only thing that seemed to work was to set them to headless, then reopen config and set them to something else. -John On Thu, Feb 15, 2018 at 8:48 AM, wrote: > Hi, > > We upgraded one of our infrastructures to 4.2.0 recently and since then some > of our machines have the "Console" button greyed-out in the Admin UI, like > they were disabled. > > I changed their compatibility to 4.2 but with no luck, as they're still > disabled. > > Is there a way to know why is that, and how to solve it? > > I'm attaching a screenshot. > > Thanks. > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From kuko at canarytek.com Thu Feb 15 14:54:01 2018 From: kuko at canarytek.com (Kuko Armas) Date: Thu, 15 Feb 2018 14:54:01 +0000 (WET) Subject: [ovirt-users] hosted-engine deploy 4.2.1 fails when ovirtmgmt is defined on vlan subinterface Message-ID: <575869960.60229.1518706441040.JavaMail.zimbra@canarytek.com> I'm not sure if I should submit a bug report about this, so I ask around here first... 
I've found a bug that "seems" related but I think it's not (https://bugzilla.redhat.com/show_bug.cgi?id=1523661) This is the problem: - I'm trying to do a clean HE deploy with oVirt 4.2.1 on a clean CentOS 7.4 host - I have a LACP bond (bond0) and I need my management network to be on vlan 1005, si I have created interface bond0.1005 on the host and everything works - I run hosted-engine deploy, and it always fails with [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ print $9 }'", "delta": "0:00:00.006473", "end": "2018-02-15 13:57:11.132359", "rc": 0, "start": "2018-02-15 13:57:11.125886", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []} [ ERROR ] Failed to execute stage 'Closing up': Failed executing ansible-playbook - Looking at the ansible playbook, I see it's trying to look for an ip rule using a custom routing table, but I have no such rule [root at ovirt1 ~]# ip rule 0: from all lookup local 32766: from all lookup main 32767: from all lookup default - I also find that I have no "ovirtmgmt" bridge bridge name bridge id STP enabled interfaces ;vdsmdummy; 8000.000000000000 no virbr0 8000.525400e6ca97 yes virbr0-nic vnet0 - But I haven't found any reference in the ansible playbook to this network creation. 
- The HE VM gets created and I can connect with SSH, so I tried to find out if the ovirtmgmt network is created via vdsm from the engine - Looking at the engine.log I found this: 2018-02-15 13:49:26,850Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to communicate wi th VDSM agent on host 'ovirt1' with address 'ovirt1' ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790') 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] EVENT_ID: VLAN_ID_ MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure management network on host ovirt1. Host ovirt1 has an interface bond0.1005 for the management netwo rk configuration with VLAN-ID (1005), which is different from data-center definition (none). 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception: org.ovirt.eng ine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Failed to configure management network - So I guess that the engine tried to create the ovirtmgmt bridge on the host via vdsm, but it failed because "Host ovirt1 has an interface bond0.1005 for the management netwo rk configuration with VLAN-ID (1005), which is different from data-center definition (none)" - Of course I haven't had the opportunity to setup the management network's vlan in the datacenter yet, because I'm still trying to deploy the Hosted Engine Is this a supported configuration? Is there a way I can tell the datacenter that the management network is on vlan 1005? Should I file a bug report? Is there a workaround? Salu2! 
-- Miguel Armas CanaryTek Consultoria y Sistemas SL http://www.canarytek.com/ From lveyde at redhat.com Thu Feb 15 15:06:09 2018 From: lveyde at redhat.com (Lev Veyde) Date: Thu, 15 Feb 2018 17:06:09 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.2 First Release Candidate is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.2 First Release Candidate, as of February 15th, 2018 This update is a release candidate of the second in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not to be used in production. This release is available now for: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is already available - oVirt Node will be available soon [2] Additional Resources: * Read more about the oVirt 4.2.2 release highlights: http://www.ovirt.org/release/4. 2 . 2 / * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4. 2 . 2 / [2] http://resources.ovirt.org/pub/ovirt-4. 2-pre /iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jas at cse.yorku.ca Thu Feb 15 15:37:48 2018 From: jas at cse.yorku.ca (Jason Keltz) Date: Thu, 15 Feb 2018 10:37:48 -0500 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> Message-ID: On 02/15/2018 08:48 AM, nicolas at devels.es wrote: > Hi, > > We upgraded one of our infrastructures to 4.2.0 recently and since > then some of our machines have the "Console" button greyed-out in the > Admin UI, like they were disabled. > > I changed their compatibility to 4.2 but with no luck, as they're > still disabled. > > Is there a way to know why is that, and how to solve it? > > I'm attaching a screenshot. Hi Nicolas. I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2. See bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1528868 (which admittedly was a mesh of a bunch of different issues that occurred) Red Hat was never really able to figure out why, and I think they pretty much just dropped the issue because it seemed like it only happened for me. In order to resolve it, I had to delete the VMs (not the disk of course), and recreate them, and then I got the console option back. It's "good" to see that it's not just me that had this problem. There's a bug to be found there somewhere!! Jason. From tadavis at lbl.gov Thu Feb 15 16:20:22 2018 From: tadavis at lbl.gov (Thomas Davis) Date: Thu, 15 Feb 2018 08:20:22 -0800 Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. In-Reply-To: References: Message-ID: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> In playing with this, I found that 4.2.1 hosted-engine will not install on a node with the ovirtmgmt interface being a vlan. Is this a still supported config? I see that access port, bonded and vlan tagged are supported by older versions.. 
thomas On 02/05/2018 08:16 AM, Simone Tiraboschi wrote: > > > On Fri, Feb 2, 2018 at 9:10 PM, Thomas Davis > wrote: > > Is this supported? > > I have a node, that centos 7.4 minimal is installed on, with an > interface setup for an IP address. > > I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run > screen, and then do the 'hosted-engine --deploy' command. > > > Fine, nothing else is required. > > > It hangs on: > > [ INFO? ] changed: [localhost] > [ INFO? ] TASK [Get ovirtmgmt route table id] > [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": > true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ > //g | awk '{ print $9 }'", "delta": "0:00:00.004845", "end": > "2018-02-02 12:03:30.794860", "rc": 0, "start": "2018-02-02 > 12:03:30.790015", "stderr": "", "stderr_lines": [], "stdout": "", > "stdout_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > [ INFO? ] Stage: Clean up > [ INFO? ] Cleaning temporary resources > [ INFO? ] TASK [Gathering Facts] > [ INFO? ] ok: [localhost] > [ INFO? ] TASK [Remove local vm dir] > [ INFO? ] ok: [localhost] > [ INFO? ] Generating answer file > '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202120333.conf' > [ INFO? ] Stage: Pre-termination > [ INFO? ] Stage: Termination > [ ERROR ] Hosted Engine deployment failed: please check the logs for > the issue, fix accordingly or re-deploy from scratch. > ? ? ? ? ? Log file is located at > /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20180202115038-r11nh1.log > > but the VM is up and running, just attached to the 192.168.122.0/24 > subnet > > [root at d8-r13-c2-n1 ~]# ssh root at 192.168.122.37 > > root at 192.168.122.37 's password: > Last login: Fri Feb? 2 11:54:47 2018 from 192.168.122.1 > [root at ovirt ~]# systemctl status ovirt-engine > ? ovirt-engine.service - oVirt Engine > ? 
?Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; > enabled; vendor preset: disabled) > ? ?Active: active (running) since Fri 2018-02-02 11:54:42 PST; > 11min ago > ?Main PID: 24724 (ovirt-engine.py) > ? ?CGroup: /system.slice/ovirt-engine.service > ? ? ? ? ? ???24724 /usr/bin/python > /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py > --redirect-output --systemd=notify start > ? ? ? ? ? ???24856 ovirt-engine -server -XX:+TieredCompilation > -Xms3971M -Xmx3971M -Djava.awt.headless=true > -Dsun.rmi.dgc.client.gcInterval=3600000 > -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse... > > Feb 02 11:54:41 ovirt.crt.nersc.gov > systemd[1]: Starting oVirt Engine... > Feb 02 11:54:41 ovirt.crt.nersc.gov > ovirt-engine.py[24724]: 2018-02-02 11:54:41,767-0800 ovirt-engine: > INFO _detectJBossVersion:187 Detecting JBoss version. Running: > /usr/lib/jvm/jre/...600000', '- > Feb 02 11:54:42 ovirt.crt.nersc.gov > ovirt-engine.py[24724]: 2018-02-02 11:54:42,394-0800 ovirt-engine: > INFO _detectJBossVersion:207 Return code: 0,? | stdout: '[u'WildFly > Full 11.0.0....tderr: '[]' > Feb 02 11:54:42 ovirt.crt.nersc.gov > systemd[1]: Started oVirt Engine. 
> Feb 02 11:55:25 ovirt.crt.nersc.gov > python2[25640]: ansible-stat Invoked with checksum_algorithm=sha1 > get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:29 ovirt.crt.nersc.gov > python2[25698]: ansible-stat Invoked with checksum_algorithm=sha1 > get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:30 ovirt.crt.nersc.gov > python2[25741]: ansible-stat Invoked with checksum_algorithm=sha1 > get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:30 ovirt.crt.nersc.gov > python2[25767]: ansible-stat Invoked with checksum_algorithm=sha1 > get_checksum=True follow=False > path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True > Feb 02 11:55:31 ovirt.crt.nersc.gov > python2[25795]: ansible-stat Invoked with checksum_algorithm=sha1 > get_checksum=True follow=False > path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True > > The 'ip rule list' never has an ovirtmgmt rule/table in it.. which > means the ansible script loops then dies; vdsmd has never configured > the network on the node. > > > Right. > Can you please attach engine.log and host-deploy from the engine VM? > > > [root at d8-r13-c2-n1 ~]# systemctl status vdsmd -l > ? vdsmd.service - Virtual Desktop Server Manager > ? ?Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; > vendor preset: enabled) > ? ?Active: active (running) since Fri 2018-02-02 11:55:11 PST; > 14min ago > ?Main PID: 7654 (vdsmd) > ? ?CGroup: /system.slice/vdsmd.service > ? ? ? ? ? 
???7654 /usr/bin/python2 /usr/share/vdsm/vdsmd > > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: > Running dummybr > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: > Running tune_system > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: > Running test_space > Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: > Running test_lo > Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop > Server Manager. > Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File: > /var/run/vdsm/trackedInterfaces/vnet0 already removed > Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, > ignoring event > '|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0' > args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering > up', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', > 'type': 'vnc', 'port': '5900'}], 'hash': '5328187475809024041', > 'cpuUser': '0.00', 'monitorResponse': '0', 'elapsedTime': '0', > 'cpuSys': '0.00', 'vcpuPeriod': 100000L, 'timeOffset': '0', > 'clientIp': '', 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}} > Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available. > Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM > stats will be missing. > Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in > favor of ping2 and confirmConnectivity > > Do I need to install a complete ovirt-engine on the node first, > bring the node into ovirt, then bring up hosted-engine?? I'd like to > avoid this and just go straight to hosted-engine setup. 
> > thomas > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > From michal.skrivanek at redhat.com Thu Feb 15 16:54:59 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 17:54:59 +0100 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> Message-ID: <016BDB70-B461-44C0-B1FD-30FBC403A96B@redhat.com> > On 15 Feb 2018, at 15:58, John Taylor wrote: > > Hi Nicolas, > I had the same problem and it looked like it was because of some older > vms (I believe from 3.6) that were configured with console with video > type of CIRRUS and protocol VNC. 3.6 had cirrus indeed. That should work. Can you somehow confirm it was really a 3.6 VM and it stopped working in 4.2? The exact steps are important, unfortunately. > Tracing it out it showed that the vm libvirt xml was begin set to > headless. do you at least recall what cluster level version it was when it stopped working? The VM definition should have been changed to VGA when you move the VM from 3.6 cluster to a 4.0+ > I tried different settings but the only thing that seemed > to work was to set them to headless, then reopen config and set them > to something else. > > -John > > On Thu, Feb 15, 2018 at 8:48 AM, wrote: >> Hi, >> >> We upgraded one of our infrastructures to 4.2.0 recently and since then some >> of our machines have the "Console" button greyed-out in the Admin UI, like >> they were disabled. >> >> I changed their compatibility to 4.2 but with no luck, as they're still >> disabled. >> >> Is there a way to know why is that, and how to solve it? >> >> I'm attaching a screenshot. >> >> Thanks. 
>> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From stirabos at redhat.com Thu Feb 15 16:56:23 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 15 Feb 2018 17:56:23 +0100 Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. In-Reply-To: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> References: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> Message-ID: On Thu, Feb 15, 2018 at 5:20 PM, Thomas Davis wrote: > In playing with this, I found that 4.2.1 hosted-engine will not install on > a node with the ovirtmgmt interface being a vlan. > Can you please attach vdsm and host-deploy logs? > Is this a still supported config? I see that access port, bonded and vlan > tagged are supported by older versions.. > Yes, absolutely: if VLAN doesn't work, it's definitely an issue. Adding Ido here. > > thomas > > On 02/05/2018 08:16 AM, Simone Tiraboschi wrote: >> >> >> On Fri, Feb 2, 2018 at 9:10 PM, Thomas Davis >> tadavis at lbl.gov>> wrote: >> >> Is this supported? >> >> I have a node, that centos 7.4 minimal is installed on, with an >> interface setup for an IP address. >> >> I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run >> screen, and then do the 'hosted-engine --deploy' command. >> >> >> Fine, nothing else is required. >> >> >> It hangs on: >> >> [ INFO ] changed: [localhost] >> [ INFO ] TASK [Get ovirtmgmt route table id] >> [ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 50, "changed": >> true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ >> //g | awk '{ print $9 }'", "delta": "0:00:00.004845", "end": >> "2018-02-02 12:03:30.794860", "rc": 0, "start": "2018-02-02 >> 12:03:30.790015", "stderr": "", "stderr_lines": [], "stdout": "", >> "stdout_lines": []} >> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >> ansible-playbook >> [ INFO ] Stage: Clean up >> [ INFO ] Cleaning temporary resources >> [ INFO ] TASK [Gathering Facts] >> [ INFO ] ok: [localhost] >> [ INFO ] TASK [Remove local vm dir] >> [ INFO ] ok: [localhost] >> [ INFO ] Generating answer file >> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202 >> 120333.conf' >> [ INFO ] Stage: Pre-termination >> [ INFO ] Stage: Termination >> [ ERROR ] Hosted Engine deployment failed: please check the logs for >> the issue, fix accordingly or re-deploy from scratch. >> Log file is located at >> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup >> -20180202115038-r11nh1.log >> >> but the VM is up and running, just attached to the 192.168.122.0/24 >> subnet >> >> [root at d8-r13-c2-n1 ~]# ssh root at 192.168.122.37 >> >> root at 192.168.122.37 's password: >> Last login: Fri Feb 2 11:54:47 2018 from 192.168.122.1 >> [root at ovirt ~]# systemctl status ovirt-engine >> ? ovirt-engine.service - oVirt Engine >> Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; >> enabled; vendor preset: disabled) >> Active: active (running) since Fri 2018-02-02 11:54:42 PST; >> 11min ago >> Main PID: 24724 (ovirt-engine.py) >> CGroup: /system.slice/ovirt-engine.service >> ??24724 /usr/bin/python >> /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py >> --redirect-output --systemd=notify start >> ??24856 ovirt-engine -server -XX:+TieredCompilation >> -Xms3971M -Xmx3971M -Djava.awt.headless=true >> -Dsun.rmi.dgc.client.gcInterval=3600000 >> -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse... 
>> >> Feb 02 11:54:41 ovirt.crt.nersc.gov >> systemd[1]: Starting oVirt Engine... >> Feb 02 11:54:41 ovirt.crt.nersc.gov >> ovirt-engine.py[24724]: 2018-02-02 11:54:41,767-0800 ovirt-engine: >> INFO _detectJBossVersion:187 Detecting JBoss version. Running: >> /usr/lib/jvm/jre/...600000', '- >> Feb 02 11:54:42 ovirt.crt.nersc.gov >> ovirt-engine.py[24724]: 2018-02-02 11:54:42,394-0800 ovirt-engine: >> INFO _detectJBossVersion:207 Return code: 0, | stdout: '[u'WildFly >> Full 11.0.0....tderr: '[]' >> Feb 02 11:54:42 ovirt.crt.nersc.gov >> systemd[1]: Started oVirt Engine. >> Feb 02 11:55:25 ovirt.crt.nersc.gov >> python2[25640]: ansible-stat Invoked with checksum_algorithm=sha1 >> get_checksum=True follow=False >> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >> Feb 02 11:55:29 ovirt.crt.nersc.gov >> python2[25698]: ansible-stat Invoked with checksum_algorithm=sha1 >> get_checksum=True follow=False >> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >> Feb 02 11:55:30 ovirt.crt.nersc.gov >> python2[25741]: ansible-stat Invoked with checksum_algorithm=sha1 >> get_checksum=True follow=False >> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >> Feb 02 11:55:30 ovirt.crt.nersc.gov >> python2[25767]: ansible-stat Invoked with checksum_algorithm=sha1 >> get_checksum=True follow=False >> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >> Feb 02 11:55:31 ovirt.crt.nersc.gov >> >> python2[25795]: ansible-stat Invoked with checksum_algorithm=sha1 >> get_checksum=True follow=False >> path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True >> >> The 'ip rule list' never has an ovirtmgmt rule/table in it.. which >> means the ansible script loops then dies; vdsmd has never configured >> the network on the node. >> >> >> Right. >> Can you please attach engine.log and host-deploy from the engine VM? >> >> >> [root at d8-r13-c2-n1 ~]# systemctl status vdsmd -l >> ? 
vdsmd.service - Virtual Desktop Server Manager >> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; >> vendor preset: enabled) >> Active: active (running) since Fri 2018-02-02 11:55:11 PST; >> 14min ago >> Main PID: 7654 (vdsmd) >> CGroup: /system.slice/vdsmd.service >> ??7654 /usr/bin/python2 /usr/share/vdsm/vdsmd >> >> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >> Running dummybr >> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >> Running tune_system >> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >> Running test_space >> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >> Running test_lo >> Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop >> Server Manager. >> Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File: >> /var/run/vdsm/trackedInterfaces/vnet0 already removed >> Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, >> ignoring event >> '|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0' >> args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering >> up', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', >> 'type': 'vnc', 'port': '5900'}], 'hash': '5328187475809024041', >> 'cpuUser': '0.00', 'monitorResponse': '0', 'elapsedTime': '0', >> 'cpuSys': '0.00', 'vcpuPeriod': 100000L, 'timeOffset': '0', >> 'clientIp': '', 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}} >> Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available. >> Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM >> stats will be missing. >> Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in >> favor of ping2 and confirmConnectivity >> >> Do I need to install a complete ovirt-engine on the node first, >> bring the node into ovirt, then bring up hosted-engine? I'd like to >> avoid this and just go straight to hosted-engine setup. 
>> >> thomas >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Thu Feb 15 16:57:20 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 15 Feb 2018 17:57:20 +0100 Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. In-Reply-To: References: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> Message-ID: On Thu, Feb 15, 2018 at 5:56 PM, Simone Tiraboschi wrote: > > > On Thu, Feb 15, 2018 at 5:20 PM, Thomas Davis wrote: > >> In playing with this, I found that 4.2.1 hosted-engine will not install >> on a node with the ovirtmgmt interface being a vlan. >> > > Can you please attach vdsm and host-deploy logs? > > >> Is this a still supported config? I see that access port, bonded and >> vlan tagged are supported by older versions.. >> > > Yes, absolutely: if VLAN doesn't work, it's definitely an issue. > Adding Ido here. > Oh, just another thing. The old flow is still there as a deprecated fallback. You can force it by passing the --noansible option. > > >> >> thomas >> >> On 02/05/2018 08:16 AM, Simone Tiraboschi wrote: >>> >>> >>> On Fri, Feb 2, 2018 at 9:10 PM, Thomas Davis >>> tadavis at lbl.gov>> wrote: >>> >>> Is this supported? >>> >>> I have a node, that centos 7.4 minimal is installed on, with an >>> interface setup for an IP address. >>> >>> I've yum installed nothing else except the ovirt-4.2.1-pre rpm, run >>> screen, and then do the 'hosted-engine --deploy' command. >>> >>> >>> Fine, nothing else is required. >>> >>> >>> It hangs on: >>> >>> [ INFO ] changed: [localhost] >>> [ INFO ] TASK [Get ovirtmgmt route table id] >>> [ ERROR ] fatal: [localhost]: FAILED!
=> {"attempts": 50, "changed": >>> true, "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ >>> //g | awk '{ print $9 }'", "delta": "0:00:00.004845", "end": >>> "2018-02-02 12:03:30.794860", "rc": 0, "start": "2018-02-02 >>> 12:03:30.790015", "stderr": "", "stderr_lines": [], "stdout": "", >>> "stdout_lines": []} >>> [ ERROR ] Failed to execute stage 'Closing up': Failed executing >>> ansible-playbook >>> [ INFO ] Stage: Clean up >>> [ INFO ] Cleaning temporary resources >>> [ INFO ] TASK [Gathering Facts] >>> [ INFO ] ok: [localhost] >>> [ INFO ] TASK [Remove local vm dir] >>> [ INFO ] ok: [localhost] >>> [ INFO ] Generating answer file >>> '/var/lib/ovirt-hosted-engine-setup/answers/answers-20180202 >>> 120333.conf' >>> [ INFO ] Stage: Pre-termination >>> [ INFO ] Stage: Termination >>> [ ERROR ] Hosted Engine deployment failed: please check the logs for >>> the issue, fix accordingly or re-deploy from scratch. >>> Log file is located at >>> /var/log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup >>> -20180202115038-r11nh1.log >>> >>> but the VM is up and running, just attached to the 192.168.122.0/24 >>> subnet >>> >>> [root at d8-r13-c2-n1 ~]# ssh root at 192.168.122.37 >>> >>> root at 192.168.122.37 's password: >>> Last login: Fri Feb 2 11:54:47 2018 from 192.168.122.1 >>> [root at ovirt ~]# systemctl status ovirt-engine >>> ? 
ovirt-engine.service - oVirt Engine >>> Loaded: loaded (/usr/lib/systemd/system/ovirt-engine.service; >>> enabled; vendor preset: disabled) >>> Active: active (running) since Fri 2018-02-02 11:54:42 PST; >>> 11min ago >>> Main PID: 24724 (ovirt-engine.py) >>> CGroup: /system.slice/ovirt-engine.service >>> ??24724 /usr/bin/python >>> /usr/share/ovirt-engine/services/ovirt-engine/ovirt-engine.py >>> --redirect-output --systemd=notify start >>> ??24856 ovirt-engine -server -XX:+TieredCompilation >>> -Xms3971M -Xmx3971M -Djava.awt.headless=true >>> -Dsun.rmi.dgc.client.gcInterval=3600000 >>> -Dsun.rmi.dgc.server.gcInterval=3600000 -Djsse... >>> >>> Feb 02 11:54:41 ovirt.crt.nersc.gov >>> systemd[1]: Starting oVirt Engine... >>> Feb 02 11:54:41 ovirt.crt.nersc.gov >>> ovirt-engine.py[24724]: 2018-02-02 11:54:41,767-0800 ovirt-engine: >>> INFO _detectJBossVersion:187 Detecting JBoss version. Running: >>> /usr/lib/jvm/jre/...600000', '- >>> Feb 02 11:54:42 ovirt.crt.nersc.gov >>> ovirt-engine.py[24724]: 2018-02-02 11:54:42,394-0800 ovirt-engine: >>> INFO _detectJBossVersion:207 Return code: 0, | stdout: '[u'WildFly >>> Full 11.0.0....tderr: '[]' >>> Feb 02 11:54:42 ovirt.crt.nersc.gov >>> systemd[1]: Started oVirt Engine. 
>>> Feb 02 11:55:25 ovirt.crt.nersc.gov >>> python2[25640]: ansible-stat Invoked with checksum_algorithm=sha1 >>> get_checksum=True follow=False >>> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >>> Feb 02 11:55:29 ovirt.crt.nersc.gov >>> python2[25698]: ansible-stat Invoked with checksum_algorithm=sha1 >>> get_checksum=True follow=False >>> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >>> Feb 02 11:55:30 ovirt.crt.nersc.gov >>> python2[25741]: ansible-stat Invoked with checksum_algorithm=sha1 >>> get_checksum=True follow=False >>> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >>> Feb 02 11:55:30 ovirt.crt.nersc.gov >>> python2[25767]: ansible-stat Invoked with checksum_algorithm=sha1 >>> get_checksum=True follow=False >>> path=/usr/share/ovirt-engine/playbooks/roles/ovir...ributes=True >>> Feb 02 11:55:31 ovirt.crt.nersc.gov >>> >>> python2[25795]: ansible-stat Invoked with checksum_algorithm=sha1 >>> get_checksum=True follow=False >>> path=/etc/ovirt-engine-metrics/config.yml get_md5...ributes=True >>> >>> The 'ip rule list' never has an ovirtmgmt rule/table in it.. which >>> means the ansible script loops then dies; vdsmd has never configured >>> the network on the node. >>> >>> >>> Right. >>> Can you please attach engine.log and host-deploy from the engine VM? >>> >>> >>> [root at d8-r13-c2-n1 ~]# systemctl status vdsmd -l >>> ? 
vdsmd.service - Virtual Desktop Server Manager >>> Loaded: loaded (/usr/lib/systemd/system/vdsmd.service; enabled; >>> vendor preset: enabled) >>> Active: active (running) since Fri 2018-02-02 11:55:11 PST; >>> 14min ago >>> Main PID: 7654 (vdsmd) >>> CGroup: /system.slice/vdsmd.service >>> ??7654 /usr/bin/python2 /usr/share/vdsm/vdsmd >>> >>> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >>> Running dummybr >>> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >>> Running tune_system >>> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >>> Running test_space >>> Feb 02 11:55:11 d8-r13-c2-n1 vdsmd_init_common.sh[7551]: vdsm: >>> Running test_lo >>> Feb 02 11:55:11 d8-r13-c2-n1 systemd[1]: Started Virtual Desktop >>> Server Manager. >>> Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN File: >>> /var/run/vdsm/trackedInterfaces/vnet0 already removed >>> Feb 02 11:55:12 d8-r13-c2-n1 vdsm[7654]: WARN Not ready yet, >>> ignoring event >>> '|virt|VM_status|ba56a114-efb0-45e0-b2ad-808805ae93e0' >>> args={'ba56a114-efb0-45e0-b2ad-808805ae93e0': {'status': 'Powering >>> up', 'displayInfo': [{'tlsPort': '-1', 'ipAddress': '127.0.0.1', >>> 'type': 'vnc', 'port': '5900'}], 'hash': '5328187475809024041', >>> 'cpuUser': '0.00', 'monitorResponse': '0', 'elapsedTime': '0', >>> 'cpuSys': '0.00', 'vcpuPeriod': 100000L, 'timeOffset': '0', >>> 'clientIp': '', 'pauseCode': 'NOERR', 'vcpuQuota': '-1'}} >>> Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available. >>> Feb 02 11:55:13 d8-r13-c2-n1 vdsm[7654]: WARN MOM not available, KSM >>> stats will be missing. >>> Feb 02 11:55:17 d8-r13-c2-n1 vdsm[7654]: WARN ping was deprecated in >>> favor of ping2 and confirmConnectivity >>> >>> Do I need to install a complete ovirt-engine on the node first, >>> bring the node into ovirt, then bring up hosted-engine? I'd like to >>> avoid this and just go straight to hosted-engine setup. 
>>> >>> thomas >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Thu Feb 15 17:05:01 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 18:05:01 +0100 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> Message-ID: <2EA3B6C6-C3E8-45D8-8ED4-4DF0AE97D279@redhat.com> > On 15 Feb 2018, at 16:37, Jason Keltz wrote: > > On 02/15/2018 08:48 AM, nicolas at devels.es wrote: >> Hi, >> >> We upgraded one of our infrastructures to 4.2.0 recently and since then some of our machines have the "Console" button greyed-out in the Admin UI, like they were disabled. >> >> I changed their compatibility to 4.2 but with no luck, as they're still disabled. >> >> Is there a way to know why is that, and how to solve it? >> >> I'm attaching a screenshot. > > Hi Nicolas. > I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2. > See bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1528868 > (which admittedly was a mess of a bunch of different issues that occurred) yeah, that's not a good idea to mix more issues :) Seems https://bugzilla.redhat.com/show_bug.cgi?id=1528868#c26 is the last one relevant to the grayed-out console problem in this email thread. It's also possible to check the "VM Devices" subtab and list the graphical devices. If this is the same problem as from Nicolas then it would list cirrus, and it would be great if you can confirm the conditions are similar (i.e. originally a 3.6 VM). And then - if possible - describe some history of what happened: when was the VM created, when was the cluster updated, when was the system upgraded and to what versions.
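Besides the "VM Devices" subtab, the same information can be read from the host side, since the video model ends up in the libvirt domain XML. A rough sketch of the check follows; the XML fragment is a made-up example of what a 3.6-era VM might carry, not output captured from any host in this thread:

```shell
# Hypothetical fragment of 'virsh -r dumpxml <vm>' output. A VM created on a
# 3.6 cluster would typically carry a cirrus video model; a 4.x-era VM
# typically shows qxl or vga instead.
xml='<video>
  <model type="cirrus" vram="16384" heads="1"/>
</video>'

# Pull the video model type out of the fragment.
printf '%s\n' "$xml" | grep -o 'model type="[a-z]*"'
# -> model type="cirrus"
```

On a real host one would pipe the actual `virsh -r dumpxml <vm>` output through the same grep instead of the sample variable.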
Thanks, michal > Red Hat was never really able to figure out why, and I think they pretty much just dropped the issue because it seemed like it only happened for me. In order to resolve it, I had to delete the VMs (not the disk of course), and recreate them, and then I got the console option back. > It's "good" to see that it's not just me that had this problem. There's a bug to be found there somewhere!! > > Jason. > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From michal.skrivanek at redhat.com Thu Feb 15 17:07:39 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 18:07:39 +0100 Subject: [ovirt-users] Sparsify in 4.2 - where it moved ? In-Reply-To: <412264FE-9AF0-4058-8153-EE22D7DF52B7@starlett.lv> References: <412264FE-9AF0-4058-8153-EE22D7DF52B7@starlett.lv> Message-ID: > On 15 Feb 2018, at 14:17, Andrei V wrote: > > Hi ! > > > I can't locate the "Sparsify" disk image command anywhere in oVirt 4.2. > Where has it been moved? good question :) Was it lost in the GUI redesign? > > > Thanks > Andrei > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From michal.skrivanek at redhat.com Thu Feb 15 17:10:16 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 18:10:16 +0100 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: References: Message-ID: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> > On 15 Feb 2018, at 12:06, Mark Steele wrote: > > I have a host that is currently reporting down with NO VM's on it or associated with it. However when I attempt to put it into maintenance mode, I get the following error: > > Host hv-01 cannot change into maintenance mode - not all Vms have been migrated successfully.
Consider manual intervention: stopping/migrating Vms: (User: admin) > > I am running > oVirt Engine Version: 3.5.0.1-1.el6 that's a really old version... First confirm there is no running VM on that host (log in there, look for qemu processes). If not, it's likely just an engine issue - somewhere it lost track of what's actually running there. In that case you could try to restart the host, then restart the engine; that should help. > > *** > Mark Steele > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From bzlotnik at redhat.com Thu Feb 15 17:25:52 2018 From: bzlotnik at redhat.com (Benny Zlotnik) Date: Thu, 15 Feb 2018 19:25:52 +0200 Subject: [ovirt-users] Sparsify in 4.2 - where it moved ? In-Reply-To: References: <412264FE-9AF0-4058-8153-EE22D7DF52B7@starlett.lv> Message-ID: Under the 3 dots, as can be seen in the attached screenshot. On Thu, Feb 15, 2018 at 7:07 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > > > On 15 Feb 2018, at 14:17, Andrei V wrote: > > > > Hi ! > > > > > > I can't locate the "Sparsify" disk image command anywhere in oVirt 4.2. > > Where has it been moved? > > good question :) > Was it lost in the GUI redesign?
> > > > > > Thanks > > Andrei > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot-2018-2-15 oVirt Open Virtualization Manager.png Type: image/png Size: 38699 bytes Desc: not available URL: From ccox at endlessnow.com Thu Feb 15 17:34:31 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Thu, 15 Feb 2018 11:34:31 -0600 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> Message-ID: <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> On 02/15/2018 11:10 AM, Michal Skrivanek wrote: ..snippity... with regards to oVirt 3.5 > > that's a really old version... I know I'll catch heat for this, but by "old" you mean like December of 2015? Just trying to put things into perspective. Thus it goes with the ancient and decrepit Red Hat Ent. 7.1 days, right? I know, I know, FOSS... the only thing worse than running today's code is running yesterday's. We still run a 3.5 oVirt in our dev lab, btw. But I would not have set that up (not that I would have recommended oVirt to begin with), preferring 3.4 at the time. I would have waited for 3.6. With that said, 3.5 isn't exactly on the "stable line" to Red Hat Virtualization; that was 3.4 and then 3.6. Some people can't afford major (downtime) upgrades every 3-6 months or so. But, arguably, maybe we shouldn't be running oVirt. Maybe it's not designed for "production". I guess oVirt isn't really for production by definition, but many of us are doing so. So...
not really a "ding" against oVirt developers, it's just a rapidly moving target with the normal risks that come with that. People just need to understand that. And with that said, the fact that many of us are running those ancient decrepit evil versions of oVirt in production today, is actually a testimony to its quality. Good job devs! From kuko at canarytek.com Thu Feb 15 17:36:46 2018 From: kuko at canarytek.com (Kuko Armas) Date: Thu, 15 Feb 2018 17:36:46 +0000 (WET) Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. In-Reply-To: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> References: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> Message-ID: <1169820380.61075.1518716206625.JavaMail.zimbra@canarytek.com> > In playing with this, I found that 4.2.1 hosted-engine will not install > on a node with the ovirtmgmt interface being a vlan. > > Is this a still supported config? I see that access port, bonded and > vlan tagged are supported by older versions.. That's **exactly** the same problem I just posted some hours ago ;) Salu2! -- Miguel Armas CanaryTek Consultoria y Sistemas SL http://www.canarytek.com/ From kuko at canarytek.com Thu Feb 15 17:42:59 2018 From: kuko at canarytek.com (Kuko Armas) Date: Thu, 15 Feb 2018 17:42:59 +0000 (WET) Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. In-Reply-To: References: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> Message-ID: <1528253021.61107.1518716579592.JavaMail.zimbra@canarytek.com> > Can you please attach vdsm and host-deploy logs? 
This is what I saw in engine.log 2018-02-15 13:49:26,850Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to communicate wi th VDSM agent on host 'ovirt1' with address 'ovirt1' ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790') 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] EVENT_ID: VLAN_ID_ MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure management network on host ovirt1. Host ovirt1 has an interface bond0.1005 for the management netwo rk configuration with VLAN-ID (1005), which is different from data-center definition (none). 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception: org.ovirt.eng ine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: Failed to configure management network It seems that ovirtmgmt creation is skipped because of an inconsistency between the datacenter setup and the host setup. Of course we cannot configure the vlan for the mgmt network in the datacenter because we are still deploying Salu2! -- Miguel Armas CanaryTek Consultoria y Sistemas SL http://www.canarytek.com/ From stirabos at redhat.com Thu Feb 15 17:46:21 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 15 Feb 2018 18:46:21 +0100 Subject: [ovirt-users] hosted-engine deploy 4.2.1 fails when ovirtmgmt is defined on vlan subinterface In-Reply-To: <575869960.60229.1518706441040.JavaMail.zimbra@canarytek.com> References: <575869960.60229.1518706441040.JavaMail.zimbra@canarytek.com> Message-ID: On Thu, Feb 15, 2018 at 3:54 PM, Kuko Armas wrote: > > I'm not sure if I should submit a bug report about this, so I ask around > here first...
> I've found a bug that "seems" related but I think it's not ( > https://bugzilla.redhat.com/show_bug.cgi?id=1523661) > > This is the problem: > > - I'm trying to do a clean HE deploy with oVirt 4.2.1 on a clean CentOS > 7.4 host > - I have a LACP bond (bond0) and I need my management network to be on > vlan 1005, si I have created interface bond0.1005 on the host and > everything works > - I run hosted-engine deploy, and it always fails with > > [ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 50, "changed": true, > "cmd": "ip rule list | grep ovirtmgmt | sed s/\\\\[.*\\\\]\\ //g | awk '{ > print $9 }'", "delta": "0:00:00.006473", "end": "2018-02-15 > 13:57:11.132359", "rc": 0, "start": "2018-02-15 13:57:11.125886", "stderr": > "", "stderr_lines": [], "stdout": "", "stdout_lines": []} > [ ERROR ] Failed to execute stage 'Closing up': Failed executing > ansible-playbook > > - Looking at the ansible playbook, I see it's trying to look for an ip > rule using a custom routing table, but I have no such rule > > [root at ovirt1 ~]# ip rule > 0: from all lookup local > 32766: from all lookup main > 32767: from all lookup default > > - I also find that I have no "ovirtmgmt" bridge > > bridge name bridge id STP enabled interfaces > ;vdsmdummy; 8000.000000000000 no > virbr0 8000.525400e6ca97 yes virbr0-nic > vnet0 > > - But I haven't found any reference in the ansible playbook to this > network creation. > > - The HE VM gets created and I can connect with SSH, so I tried to find > out if the ovirtmgmt network is created via vdsm from the engine > - Looking at the engine.log I found this: > > 2018-02-15 13:49:26,850Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker] > (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to > communicate wi > th VDSM agent on host 'ovirt1' with address 'ovirt1' > ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790') > 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) > [15c7e33a] EVENT_ID: VLAN_ID_ > MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure > management network on host ovirt1. Host ovirt1 has an interface bond0.1005 > for the management netwo > rk configuration with VLAN-ID (1005), which is different from data-center > definition (none). > 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception: > org.ovirt.eng > ine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: > Failed to configure management network > > - So I guess that the engine tried to create the ovirtmgmt bridge on the > host via vdsm, but it failed because "Host ovirt1 has an interface > bond0.1005 for the management netwo > rk configuration with VLAN-ID (1005), which is different from data-center > definition (none)" > - Of course I haven't had the opportunity to setup the management > network's vlan in the datacenter yet, because I'm still trying to deploy > the Hosted Engine > > Is this a supported configuration? Is there a way I can tell the > datacenter that the management network is on vlan 1005? Should I file a bug > report? > Yes, please. > Is there a workaround? > You can pass --noansible and fallback to the previous flow, sorry. > > Salu2! > -- > Miguel Armas > CanaryTek Consultoria y Sistemas SL > http://www.canarytek.com/ > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Thu Feb 15 17:48:18 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Thu, 15 Feb 2018 18:48:18 +0100 Subject: [ovirt-users] hosted-engine 4.2.1-pre setup on a clean node.. 
In-Reply-To: <1528253021.61107.1518716579592.JavaMail.zimbra@canarytek.com> References: <94882e6e-6ef8-cd72-0565-2e6c127c36fc@lbl.gov> <1528253021.61107.1518716579592.JavaMail.zimbra@canarytek.com> Message-ID: On Thu, Feb 15, 2018 at 6:42 PM, Kuko Armas wrote: > > > Can you please attach vdsm and host-deploy logs? > > This is what I saw in engine.log > > 2018-02-15 13:49:26,850Z INFO [org.ovirt.engine.core.bll.host.HostConnectivityChecker] > (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Engine managed to > communicate wi > th VDSM agent on host 'ovirt1' with address 'ovirt1' > ('06651b32-4ef8-4b5d-ab2d-c38e84c2d790') > 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-1) > [15c7e33a] EVENT_ID: VLAN_ID_ > MISMATCH_FOR_MANAGEMENT_NETWORK_CONFIGURATION(1,119), Failed to configure > management network on host ovirt1. Host ovirt1 has an interface bond0.1005 > for the management netwo > rk configuration with VLAN-ID (1005), which is different from data-center > definition (none). > 2018-02-15 13:49:30,302Z ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > (EE-ManagedThreadFactory-engine-Thread-1) [15c7e33a] Exception: > org.ovirt.eng > ine.core.bll.network.NetworkConfigurator$NetworkConfiguratorException: > Failed to configure management network > > It seems that ovirtmgmt creation is skipper because inconsistency between > datacenter setup and host setup. Of course we can not configure the vlan > for mgmt network in datacenter because we are still deploying > Yes, you are absolutely right: the setup has to set the VLAN ID at datacenter level before adding the host as the old flow was doing. > > Salu2! > -- > Miguel Armas > CanaryTek Consultoria y Sistemas SL > http://www.canarytek.com/ > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.skrivanek at redhat.com Thu Feb 15 18:09:25 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 10:09:25 -0800 Subject: [ovirt-users] Sparsify in 4.2 - where it moved ? In-Reply-To: References: <412264FE-9AF0-4058-8153-EE22D7DF52B7@starlett.lv> Message-ID: On 15 Feb 2018, at 18:25, Benny Zlotnik wrote: Under the 3 dots as can be seen in the attached screenshot Huh, I guess I confused that with sysprep and was looking for it in VM menu Thanks! On Thu, Feb 15, 2018 at 7:07 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > > > On 15 Feb 2018, at 14:17, Andrei V wrote: > > > > Hi ! > > > > > > I can't locate the 'Sparsify' disk image command anywhere in oVirt 4.2. > > Where has it been moved? > > good question:) > Was it lost in GUI redesign? > > > > > > > Thanks > > Andrei > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Thu Feb 15 18:14:44 2018 From: msteele at telvue.com (Mark Steele) Date: Thu, 15 Feb 2018 13:14:44 -0500 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> Message-ID: Michal, Thank you for the response. - there are no qemu processes running - the server has been rebooted several times - the engine has been rebooted several times The issue persists. I'm not sure where to look next. *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt.
Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Thu, Feb 15, 2018 at 12:10 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > > On 15 Feb 2018, at 12:06, Mark Steele wrote: > > I have a host that is currently reporting down with NO VM's on it or > associated with it. However when I attempt to put it into maintenance mode, > I get the following error: > > Host hv-01 cannot change into maintenance mode - not all Vms have been > migrated successfully. Consider manual intervention: stopping/migrating > Vms: (User: admin) > > I am running > oVirt Engine Version: 3.5.0.1-1.el6 > > > that?s a really old version?. > > first confirm there is no running vm on that host (log in there, look for > qemu processes) > if not, it?s likely just engine issue, somewhere it lost track of what?s > actually running there - in that case you could try to restart the host, > restart engine. that should help > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.skrivanek at redhat.com Thu Feb 15 18:28:20 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 15 Feb 2018 10:28:20 -0800 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> Message-ID: On 15 Feb 2018, at 18:34, Christopher Cox wrote: On 02/15/2018 11:10 AM, Michal Skrivanek wrote: ..snippity... with regards to oVirt 3.5 that's a really old version... I know I'll catch heat for this, but by "old" you mean like December of 2015? Just trying to put things into perspective. Thus it goes with the ancient and decrepit Red Hat Ent. 7.1 days, right? Hehe. It's not about using it, rather I was referring to the fact that we stopped developing it, stopped fixing even critical security issues. Same for 3.6, and 4.0. I know, I know, FOSS... the only thing worse than running today's code is running yesterday's. Well, there is only a limited amount of resources you can devote to actively maintain branches/releases We typically do that for two versions, covering roughly 1.5 years We still run a 3.5 oVirt in our dev lab, btw. But I would not have set that up (not that I would have recommended oVirt to begin with), preferring 3.4 at the time. I would have waited for 3.6. With that said, 3.5 isn't exactly on the "stable line" to Red Hat Virtualization, that was 3.4 and then 3.6. Some people can't afford major (downtime) upgrades every 3-6 months or so. That's why we do not really require it and still support 3.6 cluster compat in 4.2, so that does give you longer time to update. And even the cluster upgrades are rolling, we do not require any real downtime other than for rebooting individual VMs and some spare capacity to migrate workloads to during host upgrade. But, arguably, maybe we shouldn't be running oVirt. Maybe it's not designed for "production".
I guess oVirt isn't really for production by definition, but many of us are doing so. So... not really a "ding" against oVirt developers, it's just a rapidly moving target with the normal risks that come with that. People just need to understand that. Absolutely. People should understand the difference between a GA and a zstream update 6 months later. Every sw has bugs. But I would argue we do actually have quite a long supported versions, when compared to a $random project. And then yes, we do have a longer support for Red Hat Virtualization, but again in general I would doubt you can find many similar commercial products being _actively_ supported for more than few years And with that said, the fact that many of us are running those ancient decrepit evil versions of oVirt in production today, is actually a testimony to its quality. Good job devs! Thanks! _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Thu Feb 15 18:41:50 2018 From: awels at redhat.com (Alexander Wels) Date: Thu, 15 Feb 2018 13:41:50 -0500 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> Message-ID: <3439461.teDGIRJ3e8@awels> On Thursday, February 15, 2018 1:14:44 PM EST Mark Steele wrote: > Michal, > > Thank you for the response. > > - there are no qemu processes running > - the server has been rebooted several times > - the engine has been rebooted several times > > The issue persists. I'm not sure where to look next. > Have you tried right clicking on the host, and select 'Confirm Host has been rebooted' that is basically telling the engine that the host is fenced, and you should be able to put it into maintenance mode. It will ask confirmation but we know the host has been rebooted and nothing is running. 
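Before using "Confirm Host has been rebooted", it is worth double-checking on the host itself that no guests survived, since the engine takes that confirmation on faith. A minimal read-only sketch of the checks suggested in this thread (the `vdsm-client Host getVMList` command mentioned is the 4.x name; 3.5-era hosts shipped `vdsClient` instead, so treat the command name as an assumption for your version):

```shell
# Sketch: read-only checks that a host has no running guests before
# confirming the reboot in the engine UI.

# Count qemu-kvm processes in a ps listing; the [q] keeps grep from
# matching its own command line. Reads the listing from stdin so it
# can also be exercised against canned output.
count_qemu() {
    grep -c '[q]emu-kvm'
}

running=$(ps -ef | count_qemu || true)
if [ "$running" -eq 0 ]; then
    echo "no qemu-kvm processes found"
else
    echo "WARNING: $running qemu-kvm process(es) still running"
fi

# On 4.x hosts, vdsm's own view can be compared as well:
#   vdsm-client Host getVMList
```

If both the process list and vdsm's list are empty, confirming the reboot in the UI only updates the engine's bookkeeping; nothing on the host is touched.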
> > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com > twitter: http://twitter.com/telvue | facebook: > https://www.facebook.com/telvue > > On Thu, Feb 15, 2018 at 12:10 PM, Michal Skrivanek < > > michal.skrivanek at redhat.com> wrote: > > On 15 Feb 2018, at 12:06, Mark Steele wrote: > > > > I have a host that is currently reporting down with NO VM's on it or > > associated with it. However when I attempt to put it into maintenance > > mode, > > I get the following error: > > > > Host hv-01 cannot change into maintenance mode - not all Vms have been > > migrated successfully. Consider manual intervention: stopping/migrating > > Vms: (User: admin) > > > > I am running > > oVirt Engine Version: 3.5.0.1-1.el6 > > > > > > that?s a really old version?. > > > > first confirm there is no running vm on that host (log in there, look for > > qemu processes) > > if not, it?s likely just engine issue, somewhere it lost track of what?s > > actually running there - in that case you could try to restart the host, > > restart engine. that should help > > > > > > *** > > *Mark Steele* > > CIO / VP Technical Operations | TelVue Corporation > > TelVue - We Share Your Vision > > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > > J+08054&entry=gmail&source=g> 800.885.8886 x128 <(800)%20885-8886> | > > msteele at telvue.com | http:// www.telvue.com > > twitter: http://twitter.com/telvue | facebook: https://www. 
> > facebook.com/telvue > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users From jtt77777 at gmail.com Thu Feb 15 19:28:56 2018 From: jtt77777 at gmail.com (John Taylor) Date: Thu, 15 Feb 2018 14:28:56 -0500 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: <016BDB70-B461-44C0-B1FD-30FBC403A96B@redhat.com> References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <016BDB70-B461-44C0-B1FD-30FBC403A96B@redhat.com> Message-ID: On Thu, Feb 15, 2018 at 11:54 AM, Michal Skrivanek wrote: > > >> On 15 Feb 2018, at 15:58, John Taylor wrote: >> >> Hi Nicolas, >> I had the same problem and it looked like it was because of some older >> vms (I believe from 3.6) that were configured with console with video >> type of CIRRUS and protocol VNC. > > 3.6 had cirrus indeed. That should work. Can you somehow confirm it was really a 3.6 VM and it stopped working in 4.2? The exact steps are important, unfortunately. > I'm pretty sure they were VMs created in 3.6, but I can't say for absolute certain. Sorry. >> Tracing it out it showed that the vm libvirt xml was being set to >> headless. > > do you at least recall what cluster level version it was when it stopped working? The VM definition should have been changed to VGA when you move the VM from 3.6 cluster to a 4.0+ I upgraded from 4.1.something and I'm pretty sure at the time the cluster level was 4.1, and those same VMs were able to get consoles. Sorry I can't be more help now. I'll see if I have any notes that might help me remember. > >> I tried different settings but the only thing that seemed
>> >> -John >> >> On Thu, Feb 15, 2018 at 8:48 AM, wrote: >>> Hi, >>> >>> We upgraded one of our infrastructures to 4.2.0 recently and since then some >>> of our machines have the "Console" button greyed-out in the Admin UI, like >>> they were disabled. >>> >>> I changed their compatibility to 4.2 but with no luck, as they're still >>> disabled. >>> >>> Is there a way to know why is that, and how to solve it? >>> >>> I'm attaching a screenshot. >>> >>> Thanks. >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > From khandpur at ualberta.ca Thu Feb 15 19:39:07 2018 From: khandpur at ualberta.ca (Vineet Khandpur) Date: Thu, 15 Feb 2018 12:39:07 -0700 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <016BDB70-B461-44C0-B1FD-30FBC403A96B@redhat.com> Message-ID: Hello. Just had the same issue @ 4.1, upgraded to 4.2(.1) Updated cluster compatibility, then, data centre compatibility All VMs lost their hardware (NICs (showed attached but unplugged), disks (status changed to disabled) and console) Our solution was simply to connect the NICs, activate the disks, then edit the VM and set Console to headless. Shut down the VM Then before bringing it back up, unchecked headless in the VM We then had to do a Run-Once which failed Then did a normal Run. Console was available, and all hardware came back fine. Didn't have to delete and re-create anything (although had to perform the above on all 70+ production hosts including our main web servers and HA load balancers .. which wasn't fun) ... 
Hope this helps someone vk On 15 February 2018 at 12:28, John Taylor wrote: > On Thu, Feb 15, 2018 at 11:54 AM, Michal Skrivanek > wrote: > > > > > >> On 15 Feb 2018, at 15:58, John Taylor wrote: > >> > >> Hi Nicolas, > >> I had the same problem and it looked like it was because of some older > >> vms (I believe from 3.6) that were configured with console with video > >> type of CIRRUS and protocol VNC. > > > > 3.6 had cirrus indeed. That should work. Can you somehow confirm it was > really a 3.6 VM and it stopped working in 4.2? The exact steps are > important, unfortunately. > > > > I'm pretty sure they were VMs created in 3.6, but I can't say for > absolute certain. Sorry. > > >> Tracing it out it showed that the vm libvirt xml was begin set to > >> headless. > > > > do you at least recall what cluster level version it was when it stopped > working? The VM definition should have been changed to VGA when you move > the VM from 3.6 cluster to a 4.0+ > > I upgraded from 4.1.something and I'm pretty sure at the time the > cluster level was 4.1, and those same VMs were able to get consoles. > Sorry I can't be more help now. I'll see if I have any notes that > might help me remember. > > > > >> I tried different settings but the only thing that seemed > >> to work was to set them to headless, then reopen config and set them > >> to something else. > >> > >> -John > >> > >> On Thu, Feb 15, 2018 at 8:48 AM, wrote: > >>> Hi, > >>> > >>> We upgraded one of our infrastructures to 4.2.0 recently and since > then some > >>> of our machines have the "Console" button greyed-out in the Admin UI, > like > >>> they were disabled. > >>> > >>> I changed their compatibility to 4.2 but with no luck, as they're still > >>> disabled. > >>> > >>> Is there a way to know why is that, and how to solve it? > >>> > >>> I'm attaching a screenshot. > >>> > >>> Thanks. 
> >>> _______________________________________________ > >>> Users mailing list > >>> Users at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/users > >>> > >> _______________________________________________ > >> Users mailing list > >> Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > >> > >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Thu Feb 15 21:08:37 2018 From: msteele at telvue.com (Mark Steele) Date: Thu, 15 Feb 2018 16:08:37 -0500 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: <3439461.teDGIRJ3e8@awels> References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <3439461.teDGIRJ3e8@awels> Message-ID: I have with no joy. Question: Can I restart the HostedEngine with running VM's without negatively impacting them? *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Thu, Feb 15, 2018 at 1:41 PM, Alexander Wels wrote: > On Thursday, February 15, 2018 1:14:44 PM EST Mark Steele wrote: > > Michal, > > > > Thank you for the response. > > > > - there are no qemu processes running > > - the server has been rebooted several times > > - the engine has been rebooted several times > > > > The issue persists. I'm not sure where to look next. > > > > Have you tried right clicking on the host, and select 'Confirm Host has > been > rebooted' that is basically telling the engine that the host is fenced, and > you should be able to put it into maintenance mode. It will ask > confirmation > but we know the host has been rebooted and nothing is running. 
> > > > > *** > > *Mark Steele* > > CIO / VP Technical Operations | TelVue Corporation > > TelVue - We Share Your Vision > > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com > > twitter: http://twitter.com/telvue | facebook: > > https://www.facebook.com/telvue > > > > On Thu, Feb 15, 2018 at 12:10 PM, Michal Skrivanek < > > > > michal.skrivanek at redhat.com> wrote: > > > On 15 Feb 2018, at 12:06, Mark Steele wrote: > > > > > > I have a host that is currently reporting down with NO VM's on it or > > > associated with it. However when I attempt to put it into maintenance > > > mode, > > > I get the following error: > > > > > > Host hv-01 cannot change into maintenance mode - not all Vms have been > > > migrated successfully. Consider manual intervention: stopping/migrating > > > Vms: (User: admin) > > > > > > I am running > > > oVirt Engine Version: 3.5.0.1-1.el6 > > > > > > > > > that?s a really old version?. > > > > > > first confirm there is no running vm on that host (log in there, look > for > > > qemu processes) > > > if not, it?s likely just engine issue, somewhere it lost track of > what?s > > > actually running there - in that case you could try to restart the > host, > > > restart engine. that should help > > > > > > > > > *** > > > *Mark Steele* > > > CIO / VP Technical Operations | TelVue Corporation > > > TelVue - We Share Your Vision > > > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > > 7C+Mt.+Laurel,+N > > > J+08054&entry=gmail&source=g> 800.885.8886 x128 <(800)%20885-8886> | > > > msteele at telvue.com | http:// www.telvue.com > > > twitter: http://twitter.com/telvue | facebook: https://www. > > > facebook.com/telvue > > > _______________________________________________ > > > Users mailing list > > > Users at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kuko at canarytek.com Thu Feb 15 21:19:44 2018 From: kuko at canarytek.com (Kuko Armas) Date: Thu, 15 Feb 2018 21:19:44 +0000 (WET) Subject: [ovirt-users] hosted-engine deploy 4.2.1 fails when ovirtmgmt is defined on vlan subinterface In-Reply-To: References: <575869960.60229.1518706441040.JavaMail.zimbra@canarytek.com> Message-ID: <928962754.61537.1518729584460.JavaMail.zimbra@canarytek.com> >> Is this a supported configuration? Is there a way I can tell the >> datacenter that the management network is on vlan 1005? Should I file a bug >> report? >> > Yes, please. Submitted https://bugzilla.redhat.com/show_bug.cgi?id=1545931 >> Is there a workaround? >> > > You can pass --noansible and fallback to the previous flow, sorry. Yes, I can confirm that the "noansible" version works If the ansible playbook is responsible for deploying the engine inside the HE VM (and you give me some directions where to look), I can try to fix it in my setup and send a patch Salu2! -- Miguel Armas CanaryTek Consultoria y Sistemas SL http://www.canarytek.com/ From rightkicktech at gmail.com Thu Feb 15 21:35:10 2018 From: rightkicktech at gmail.com (Alex K) Date: Thu, 15 Feb 2018 23:35:10 +0200 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <3439461.teDGIRJ3e8@awels> Message-ID: Yes you can. On Feb 15, 2018 23:09, "Mark Steele" wrote: > I have with no joy. > > Question: Can I restart the HostedEngine with running VM's without > negatively impacting them? > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <800%20885%208886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. 
> facebook.com/telvue > > On Thu, Feb 15, 2018 at 1:41 PM, Alexander Wels wrote: > >> On Thursday, February 15, 2018 1:14:44 PM EST Mark Steele wrote: >> > Michal, >> > >> > Thank you for the response. >> > >> > - there are no qemu processes running >> > - the server has been rebooted several times >> > - the engine has been rebooted several times >> > >> > The issue persists. I'm not sure where to look next. >> > >> >> Have you tried right clicking on the host, and select 'Confirm Host has >> been >> rebooted' that is basically telling the engine that the host is fenced, >> and >> you should be able to put it into maintenance mode. It will ask >> confirmation >> but we know the host has been rebooted and nothing is running. >> >> > >> > *** >> > *Mark Steele* >> > CIO / VP Technical Operations | TelVue Corporation >> > TelVue - We Share Your Vision >> > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com >> > twitter: http://twitter.com/telvue | facebook: >> > https://www.facebook.com/telvue >> > >> > On Thu, Feb 15, 2018 at 12:10 PM, Michal Skrivanek < >> > >> > michal.skrivanek at redhat.com> wrote: >> > > On 15 Feb 2018, at 12:06, Mark Steele wrote: >> > > >> > > I have a host that is currently reporting down with NO VM's on it or >> > > associated with it. However when I attempt to put it into maintenance >> > > mode, >> > > I get the following error: >> > > >> > > Host hv-01 cannot change into maintenance mode - not all Vms have been >> > > migrated successfully. Consider manual intervention: >> stopping/migrating >> > > Vms: (User: admin) >> > > >> > > I am running >> > > oVirt Engine Version: 3.5.0.1-1.el6 >> > > >> > > >> > > that?s a really old version?. 
>> > > >> > > first confirm there is no running vm on that host (log in there, look >> for >> > > qemu processes) >> > > if not, it?s likely just engine issue, somewhere it lost track of >> what?s >> > > actually running there - in that case you could try to restart the >> host, >> > > restart engine. that should help >> > > >> > > >> > > *** >> > > *Mark Steele* >> > > CIO / VP Technical Operations | TelVue Corporation >> > > TelVue - We Share Your Vision >> > > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> > > > +Mt.+Laurel,+N >> > > J+08054&entry=gmail&source=g> 800.885.8886 x128 <(800)%20885-8886> | >> > > msteele at telvue.com | http:// www.telvue.com >> > > twitter: http://twitter.com/telvue | facebook: https://www. >> > > facebook.com/telvue >> > > _______________________________________________ >> > > Users mailing list >> > > Users at ovirt.org >> > > http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehaas at redhat.com Thu Feb 15 21:59:15 2018 From: ehaas at redhat.com (Edward Haas) Date: Thu, 15 Feb 2018 23:59:15 +0200 Subject: [ovirt-users] Virtual networks in oVirt 4.2 and MTU 1500 In-Reply-To: <671401518642777@web22o.yandex.ru> References: <671401518642777@web22o.yandex.ru> Message-ID: On Wed, Feb 14, 2018 at 11:12 PM, Dmitry Semenov wrote: > I have a not big cluster on oVirt 4.2. > Each node has a bond, that has several vlans in its turn. > I use virtual networks OVN (External Provider -> ovirt-provider-ovn). > > While testing I have noticed that in virtual network MTU must be less > 1500, so my question is may I change something in network or in bond in > order everything in virtual network works correctly with MTU 1500? > What do you mean that it must be less? 
If you want to have jumbo frames, you need your node to support it (HW support) and the media that connects the nodes to support it as well. Then, you can go to the oVirt Engine, network section and set it (ovirtmgmt?) to whatever mtu you like (9000?). If you have vlan/s defined on the bond which are not controlled by oVirt, then you need to do that manually. > > Below link with my settings: > https://pastebin.com/F7ssCVFa > > -- > Best regards > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > Thanks, Edy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ehaas at redhat.com Thu Feb 15 22:14:44 2018 From: ehaas at redhat.com (Edward Haas) Date: Fri, 16 Feb 2018 00:14:44 +0200 Subject: [ovirt-users] ERROR - some other host already uses IP ###.###.###.### In-Reply-To: References: Message-ID: On Thu, Feb 15, 2018 at 12:36 PM, Mark Steele wrote: > Good morning, > > We had a storage crash early this morning that messed up a couple of our > ovirt hosts. Networking seemed to be the biggest issue. I have decided to > remove the bridge information in /etc/sysconfig/network-scripts and ip the > nics in order to re-import them into my ovirt installation (I have already > removed the hosts). > > One of the NIC's refuses to come up and is generating the following error: > > ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other > host (0C:C4:7A:5B:11:5C) already uses address ###.###.###.###. > > When I ARP on this server, I do not see that Mac address - and none of my > other hosts are using it either. I'm not sure where to go next other than > completely reinstalling Centos on this server and starting over. > I think it tells you that another node on the network is using the same IP address. If this iface has that static IP defined, perhaps just replace it. > > Ovirt version is oVirt Engine Version: 3.5.0.1-1.el6 > Very (very) old. 
> > OS version is > > CentOS Linux release 7.4.1708 (Core) > > > Thank you > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From matt at khoza.com Thu Feb 15 23:23:30 2018 From: matt at khoza.com (Matt Simonsen) Date: Thu, 15 Feb 2018 15:23:30 -0800 Subject: [ovirt-users] Partition Trouble on oVirt Node Message-ID: <5671281f-edf6-df53-31ff-398808d88763@khoza.com> Hello all, This may not be oVirt specific (but it may be) so thank you in advance for any assistance. I have a system installed with oVirt Node Next 4.1.9 that was installed to /dev/sda I had a separate RAID Volume /dev/sdb that should not have been used, but now that the operating system is loaded I'm struggling to get the device partitioned. I've tried mkfs.ext4 on the device and also pvcreate, with the errors below. I've also rebooted a couple times and tried to disable multipathd. Is multipathd even safe to disable on Node Next? Below are the errors I've received, and thank you again for any tips. [root at node1-g6-h3 ~]# mkfs.ext4 /dev/sdb mke2fs 1.42.9 (28-Dec-2013) /dev/sdb is entire device, not just one partition! Proceed anyway? (y,n) y /dev/sdb is apparently in use by the system; will not make a filesystem here! [root at node1-g6-h3 ~]# gdisk GPT fdisk (gdisk) version 0.8.6 Type device filename, or press to exit: /dev/sdb Caution: invalid main GPT header, but valid backup; regenerating main header from backup! Caution! After loading partitions, the CRC doesn't check out! Warning! 
Main partition table CRC mismatch! Loaded backup partition table instead of main partition table! Warning! One or more CRCs don't match. You should repair the disk! Partition table scan: MBR: not present BSD: not present APM: not present GPT: damaged Found invalid MBR and corrupt GPT. What do you want to do? (Using the GPT MAY permit recovery of GPT data.) 1 - Use current GPT 2 - Create blank GPT Your answer: 2 Command (? for help): n Partition number (1-128, default 1): First sector (34-16952264590, default = 2048) or {+-}size{KMGTP}: Last sector (2048-16952264590, default = 16952264590) or {+-}size{KMGTP}: Current type is 'Linux filesystem' Hex code or GUID (L to show codes, Enter = 8300): 8e00 Changed type of partition to 'Linux LVM' Command (? for help): w Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!! Do you want to proceed? (Y/N): y OK; writing new GUID partition table (GPT) to /dev/sdb. The operation has completed successfully. [root at node1-g6-h3 ~]# pvcreate /dev/sdb1 Device /dev/sdb1 not found (or ignored by filtering). From Alex at unix1337.com Fri Feb 16 03:50:27 2018 From: Alex at unix1337.com (Alex Bartonek) Date: Thu, 15 Feb 2018 22:50:27 -0500 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: -------- Original Message -------- On February 15, 2018 12:52 AM, Yedidyah Bar David wrote: >On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek Alex at unix1337.com wrote: >>-------- Original Message -------- >> On February 14, 2018 2:23 AM, Yedidyah Bar David didi at redhat.com wrote: >>>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote: >>>>I've built and rebuilt about 4 oVirt servers. Consider myself pretty good >>>> at this. LOL. >>>> So I am setting up a oVirt server for a friend on his r710. CentOS 7, ovirt >>>> 4.2. /etc/hosts has the correct IP and FQDN setup. 
>>>> When I build a VM and try to open a console session via SPICE I am unable >>>> to connect to the graphic server. I'm connecting from a Windows 10 box. >>>> Using virt-manager to connect. >>>>What happens when you try? >>>Unable to connect to the graphic console is what the error says. Here is the .vv file other than the cert stuff in it: >>[virt-viewer] >> type=spice >> host=192.168.1.83 >> port=-1 >> password= >>Password is valid for 120 seconds. >> >>delete-this-file=1 >> fullscreen=0 >> title=Win_7_32bit:%d >> toggle-fullscreen=shift+f11 >> release-cursor=shift+f12 >> tls-port=5900 >> enable-smartcard=0 >> enable-usb-autoshare=1 >> usb-filter=-1,-1,-1,-1,0 >> tls-ciphers=DEFAULT >>host-subject=O=williams.com,CN=randb.williams.com >>Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue..no go. >> > > Did you verify that you can connect there manually (e.g. with telnet)? > Can you run a sniffer on both sides to make sure traffic passes correctly? > Can you check vdsm/libvirt logs on the host side? Ok.. I must have tanked it on install with the firewall. The firewall is blocking port 5900. This is on CentOS 7. If I flush the rules, it works. From rightkicktech at gmail.com Fri Feb 16 05:59:30 2018 From: rightkicktech at gmail.com (Alex K) Date: Fri, 16 Feb 2018 07:59:30 +0200 Subject: [ovirt-users] Partition Trouble on oVirt Node In-Reply-To: <5671281f-edf6-df53-31ff-398808d88763@khoza.com> References: <5671281f-edf6-df53-31ff-398808d88763@khoza.com> Message-ID: Did you try partprobe and pvscan before pvcreate? On Feb 16, 2018 01:23, "Matt Simonsen" wrote: > Hello all, > > This may not be oVirt specific (but it may be) so thank you in advance for > any assistance. 
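On oVirt Node, the "apparently in use by the system" message usually means device-mapper (via multipathd) has claimed /dev/sdb, which would also explain pvcreate ignoring /dev/sdb1 by filtering. A hedged sketch of checking and releasing the disk before partitioning — the map name and WWID below are placeholders, not values from this thread:

```shell
# See whether sdb appears as a path inside a multipath map.
multipath -ll

# Flush that map so the kernel releases the underlying disk
# (replace MAPNAME with the first-column name from the multipath -ll output).
multipath -f MAPNAME

# To keep multipathd away from the disk permanently, blacklist its WWID
# in /etc/multipath.conf and reload the daemon:
#   blacklist {
#       wwid "<wwid-from-multipath-ll>"   # placeholder
#   }
# systemctl reload multipathd

# Then re-read the partition table and retry LVM, as suggested above.
partprobe /dev/sdb
pvscan
pvcreate /dev/sdb1
```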
> > I have a system installed with oVirt Node Next 4.1.9 that was installed to > /dev/sda > > I had a seperate RAID Volume /dev/sdb that should not have been used, but > now that the operating system is loaded I'm struggling to get the device > partitioned. > > I've tried mkfs.ext4 on the device and also pvcreate, with the errors > below. I've also rebooted a couple times and tried to disable multipathd. > Is multipathd even safe to disable on Node Next? > > Below are the errors I've received, and thank you again for any tips. > > > [root at node1-g6-h3 ~]# mkfs.ext4 /dev/sdb > mke2fs 1.42.9 (28-Dec-2013) > /dev/sdb is entire device, not just one partition! > Proceed anyway? (y,n) y > /dev/sdb is apparently in use by the system; will not make a filesystem > here! > [root at node1-g6-h3 ~]# gdisk > GPT fdisk (gdisk) version 0.8.6 > > Type device filename, or press to exit: /dev/sdb > Caution: invalid main GPT header, but valid backup; regenerating main > header > from backup! > > Caution! After loading partitions, the CRC doesn't check out! > Warning! Main partition table CRC mismatch! Loaded backup partition table > instead of main partition table! > > Warning! One or more CRCs don't match. You should repair the disk! > > Partition table scan: > MBR: not present > BSD: not present > APM: not present > GPT: damaged > > Found invalid MBR and corrupt GPT. What do you want to do? (Using the > GPT MAY permit recovery of GPT data.) > 1 - Use current GPT > 2 - Create blank GPT > > Your answer: 2 > > Command (? for help): n > Partition number (1-128, default 1): > First sector (34-16952264590, default = 2048) or {+-}size{KMGTP}: > Last sector (2048-16952264590, default = 16952264590) or {+-}size{KMGTP}: > Current type is 'Linux filesystem' > Hex code or GUID (L to show codes, Enter = 8300): 8e00 > Changed type of partition to 'Linux LVM' > > Command (? for help): w > > Final checks complete. About to write GPT data. THIS WILL OVERWRITE > EXISTING > PARTITIONS!! 
> > Do you want to proceed? (Y/N): y > OK; writing new GUID partition table (GPT) to /dev/sdb. > The operation has completed successfully. > [root at node1-g6-h3 ~]# pvcreate /dev/sdb1 > Device /dev/sdb1 not found (or ignored by filtering). > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.michielsen at gmail.com Fri Feb 16 05:59:28 2018 From: andy.michielsen at gmail.com (Andy Michielsen) Date: Fri, 16 Feb 2018 06:59:28 +0100 Subject: [ovirt-users] Partition Trouble on oVirt Node In-Reply-To: <5671281f-edf6-df53-31ff-398808d88763@khoza.com> References: <5671281f-edf6-df53-31ff-398808d88763@khoza.com> Message-ID: Hello Matt, Can you perform the command cat /etc/fstab and see what partitions you already created during installation ? Did you leave the installation to decide what to do with partitioning by itself or did you did that yourself ? Kind regards. > On 16 Feb 2018, at 00:23, Matt Simonsen wrote: > > Hello all, > > This may not be oVirt specific (but it may be) so thank you in advance for any assistance. > > I have a system installed with oVirt Node Next 4.1.9 that was installed to /dev/sda > > I had a seperate RAID Volume /dev/sdb that should not have been used, but now that the operating system is loaded I'm struggling to get the device partitioned. > > I've tried mkfs.ext4 on the device and also pvcreate, with the errors below. I've also rebooted a couple times and tried to disable multipathd. Is multipathd even safe to disable on Node Next? > > Below are the errors I've received, and thank you again for any tips. > > > [root at node1-g6-h3 ~]# mkfs.ext4 /dev/sdb > mke2fs 1.42.9 (28-Dec-2013) > /dev/sdb is entire device, not just one partition! > Proceed anyway? (y,n) y > /dev/sdb is apparently in use by the system; will not make a filesystem here! 
> [root at node1-g6-h3 ~]# gdisk > GPT fdisk (gdisk) version 0.8.6 > > Type device filename, or press to exit: /dev/sdb > Caution: invalid main GPT header, but valid backup; regenerating main header > from backup! > > Caution! After loading partitions, the CRC doesn't check out! > Warning! Main partition table CRC mismatch! Loaded backup partition table > instead of main partition table! > > Warning! One or more CRCs don't match. You should repair the disk! > > Partition table scan: > MBR: not present > BSD: not present > APM: not present > GPT: damaged > > Found invalid MBR and corrupt GPT. What do you want to do? (Using the > GPT MAY permit recovery of GPT data.) > 1 - Use current GPT > 2 - Create blank GPT > > Your answer: 2 > > Command (? for help): n > Partition number (1-128, default 1): > First sector (34-16952264590, default = 2048) or {+-}size{KMGTP}: > Last sector (2048-16952264590, default = 16952264590) or {+-}size{KMGTP}: > Current type is 'Linux filesystem' > Hex code or GUID (L to show codes, Enter = 8300): 8e00 > Changed type of partition to 'Linux LVM' > > Command (? for help): w > > Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING > PARTITIONS!! > > Do you want to proceed? (Y/N): y > OK; writing new GUID partition table (GPT) to /dev/sdb. > The operation has completed successfully. > [root at node1-g6-h3 ~]# pvcreate /dev/sdb1 > Device /dev/sdb1 not found (or ignored by filtering). > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From enrico.becchetti at pg.infn.it Fri Feb 16 08:45:16 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Fri, 16 Feb 2018 09:45:16 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! 
In-Reply-To: <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it> References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it> <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it> Message-ID: Dear All, Are there tools to remove this task (in attach) ? taskcleaner.sh seems not to work: [root at ovirt-new dbutils]# ./taskcleaner.sh -v -r select exists (select * from information_schema.tables where table_schema = 'public' and table_name = 'command_entities'); t SELECT DeleteAllCommands(); 6 [root at ovirt-new dbutils]# ./taskcleaner.sh -v -R select exists (select * from information_schema.tables where table_schema = 'public' and table_name = 'command_entities'); t This will remove all async_tasks table content!!! Caution, this operation should be used with care. Please contact support prior to running this command Are you sure you want to proceed? [y/n] y TRUNCATE TABLE async_tasks cascade; TRUNCATE TABLE after that I see the same running tasks. Does it make sense? Thanks Best Regards Enrico Il 14/02/2018 15:53, Enrico Becchetti ha scritto: > Dear All, > old snapshots seem to be the problem. In fact domain DATA_FC running > in 3.5 had some > lvm snapshot volumes. Before deactivating DATA_FC I didn't remove these > snapshots, so when > I attached this volume to the new ovirt 4.2 and imported all VMs at the same > time I also imported > all snapshots, but now how can I remove them? Through the ovirt web > interface the remove > tasks are still hanging. Are there any other methods? > Thanks for following this case. > Best Regards > Enrico > > Il 14/02/2018 14:34, Maor Lipchuk ha scritto: >> Seems like all the engine logs are full with the same error. 
>> From vdsm.log.16.xz I can see an error which might explain this failure: >> >> 2018-02-12 07:51:16,161+0100 INFO (ioprocess communication (40573)) >> [IOProcess] Starting ioprocess (__init__:447) >> 2018-02-12 07:51:16,201+0100 INFO (jsonrpc/3) [vdsm.api] FINISH >> mergeSnapshots return=None from=::ffff:10.0.0.46,57032, >> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, >> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52) >> 2018-02-12 07:51:16,275+0100 INFO (jsonrpc/3) >> [jsonrpc.JsonRpcServer] RPC call Image.mergeSnapshots succeeded in >> 0.13 seconds (__init__:573) >> 2018-02-12 07:51:16,276+0100 INFO (tasks/1) >> [storage.ThreadPool.WorkerThread] START task >> 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd=> of >, args=None) >> (threadPool:208) >> 2018-02-12 07:51:16,543+0100 INFO (tasks/1) [storage.Image] >> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID= >> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 >> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825 >> successor=8f595e80-1013-4c14-a2f5-252bce9526fd postZero=False >> discard=False (image:1240) >> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) >> [storage.TaskManager.Task] >> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >> 882, in _run >> return fn(*args, **kargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >> 336, in run >> return self.cmd(*self.argslist, **self.argsdict) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >> line 79, in wrapper >> return method(self, *args, **kwargs) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >> 1853, in mergeSnapshots >> discard) >> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line >> 1251, in merge >>
srcVol = vols[successor] >> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd' >> >> Ala, maybe you know if there is any known issue with mergeSnapshots? >> The usecase here are VMs from oVirt 3.5 which got registered to oVirt >> 4.2. >> >> Regards, >> Maor >> >> >> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti >> > wrote: >> >> ? Hi, >> also you can download them throught these >> links: >> >> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD >> >> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb >> >> >> Thanks again !!!! >> >> Best Regards >> Enrico >> >>> Il 13/02/2018 14:52, Maor Lipchuk ha scritto: >>>> >>>> >>>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk >>>> > wrote: >>>> >>>> >>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti >>>> >>> > wrote: >>>> >>>> see the attach files please ... thanks for your >>>> attention !!! >>>> >>>> >>>> >>>> Seems like the engine logs does not contain the entire >>>> process, can you please share older logs since the import >>>> operation? >>>> >>>> >>>> And VDSM logs as well from your host >>>> >>>> Best Regards >>>> Enrico >>>> >>>> >>>> Il 13/02/2018 14:09, Maor Lipchuk ha scritto: >>>>> >>>>> >>>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti >>>>> >>>> > wrote: >>>>> >>>>> ?Dear All, >>>>> I have been using ovirt for a long time with three >>>>> hypervisors and an external engine running in a >>>>> centos vm . >>>>> >>>>> This three hypervisors have HBAs and access to >>>>> fiber channel storage. Until recently I used >>>>> version 3.5, then I reinstalled everything from >>>>> scratch and now I have 4.2. >>>>> >>>>> Before formatting everything, I detach the storage >>>>> data domani (FC) with the virtual machines and >>>>> reimported it to the new 4.2 and all went well. In >>>>> this domain there were virtual machines with and >>>>> without snapshots. >>>>> >>>>> Now I have two problems. 
The first is that if I >>>>> try to delete a snapshot the process is not end >>>>> successful and remains hanging and the second >>>>> problem is that >>>>> in one case I lost the virtual machine !!! >>>>> >>>>> >>>>> >>>>> Not sure that I fully understand the scneario.' >>>>> How was the virtual machine got lost if you only tried >>>>> to delete a snapshot? >>>>> >>>>> >>>>> So I need your help to kill the three running >>>>> zombie tasks because with taskcleaner.sh I can't >>>>> do anything and then I need to know how I can >>>>> delete the old snapshots >>>>> made with the 3.5 without losing other data or >>>>> without having new processes that terminate correctly. >>>>> >>>>> If you want some log files please let me know. >>>>> >>>>> >>>>> >>>>> Hi Enrico, >>>>> >>>>> Can you please attach the engine and VDSM logs >>>>> >>>>> >>>>> Thank you so much. >>>>> Best Regards >>>>> Enrico >>>>> >>>>> >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>>> >>>> >>>> -- >>>> _______________________________________________________________________ >>>> >>>> Enrico Becchetti Servizio di Calcolo e Reti >>>> >>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>> ______________________________________________________________________ >>>> >>>> >>>> >>> >>> -- >>> _______________________________________________________________________ >>> >>> Enrico Becchetti Servizio di Calcolo e Reti >>> >>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>> ______________________________________________________________________ >> >> >> -- >> 
_______________________________________________________________________ >> >> Enrico Becchetti Servizio di Calcolo e Reti >> >> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: sol.pdf Type: application/pdf Size: 438584 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: Firma crittografica S/MIME URL: From thomas.fecke at eset.de Fri Feb 16 08:40:41 2018 From: thomas.fecke at eset.de (Thomas Fecke) Date: Fri, 16 Feb 2018 08:40:41 +0000 Subject: [ovirt-users] Internal Server Error while add Permission [cli] Message-ID: <077d1469b6bf4f3c886b50c69af94b2f@DR1-XEXCH01-B.eset.corp> Hey dear Community, I work a bit with the ovirt shell. That worked pretty fine, but I got some problems when I try to add a permission: What I want to do: Add a Role to a VM What I did: add permission --parent-vm-name vm1 --user-id user1 --role-id UserVmCreator Error: status: 500 reason: Internal Server Error detail: ErrorInternal Server Error Any other cli command works fine for me. What am I doing wrong? Thank you ! -------------- next part -------------- An HTML attachment was scrubbed... URL: From enrico.becchetti at pg.infn.it Fri Feb 16 08:50:55 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Fri, 16 Feb 2018 09:50:55 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it> <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it> Message-ID: <0f849bf2-6766-aa64-0395-48ae6d6ade8e@pg.infn.it> After rebooting the engine virtual machine the task disappeared, but the virtual disk is still locked. Any ideas how to remove that lock? Thanks again. Enrico Il 16/02/2018 09:45, Enrico Becchetti ha scritto: > Dear All, > Are there tools to remove this task (in attach) ? > > taskcleaner.sh seems not to work: > > [root at ovirt-new dbutils]# ./taskcleaner.sh -v -r > select exists (select * from information_schema.tables where > table_schema = 'public' and table_name = 'command_entities'); > t > SELECT DeleteAllCommands(); >
6 > [root at ovirt-new dbutils]# ./taskcleaner.sh -v -R > select exists (select * from information_schema.tables where > table_schema = 'public' and table_name = 'command_entities'); > ?t > ?This will remove all async_tasks table content!!! > Caution, this operation should be used with care. Please contact > support prior to running this command > Are you sure you want to proceed? [y/n] > y > TRUNCATE TABLE async_tasks cascade; > TRUNCATE TABLE > > after that I see the same running tasks . Does It make sense ? > > Thanks > Best Regards > Enrico > > > Il 14/02/2018 15:53, Enrico Becchetti ha scritto: >> Dear All, >> old snapsahots seem to be the problem. In fact domain DATA_FC running >> in 3.5 had some >> lvm snapshot volume. Before deactivate DATA_FC? I didin't remove this >> snapshots so when >> I attach this volume to new ovirt 4.2 and import all vm at the same >> time I also import >> all snapshots but now How I can remove them ? Throught ovirt web >> interface the remove >> tasks running are still hang. Are there any other methods ? >> Thank to following this case. >> Best Regads >> Enrico >> >> Il 14/02/2018 14:34, Maor Lipchuk ha scritto: >>> Seems like all the engine logs are full with the same error. 
>>> From vdsm.log.16.xz?I can see an error which might explain this failure: >>> >>> 2018-02-12 07:51:16,161+0100 INFO ?(ioprocess communication (40573)) >>> [IOProcess] Starting ioprocess (__init__:447) >>> 2018-02-12 07:51:16,201+0100 INFO ?(jsonrpc/3) [vdsm.api] FINISH >>> mergeSnapshots return=None from=::ffff:10.0.0.46,57032, >>> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, >>> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52) >>> 2018-02-12 07:51:16,275+0100 INFO ?(jsonrpc/3) >>> [jsonrpc.JsonRpcServer] RPC call Image.mergeSnapshots succeeded in >>> 0.13 seconds (__init__:573) >>> 2018-02-12 07:51:16,276+0100 INFO ?(tasks/1) >>> [storage.ThreadPool.WorkerThread] START task >>> 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd=>> of >, args=None) >>> (threadPool:208) >>> 2018-02-12 07:51:16,543+0100 INFO ?(tasks/1) [storage.Image] >>> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID= >>> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 >>> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825 >>> successor=8f595e80-1013-4c14-a2f5-252bce9526fdpostZero=False >>> discard=False (image:1240) >>> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) >>> [storage.TaskManager.Task] >>> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error >>> (task:875) >>> Traceback (most recent call last): >>> ? File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>> 882, in _run >>> ? ? return fn(*args, **kargs) >>> ? File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line >>> 336, in run >>> ? ? return self.cmd(*self.argslist, **self.argsdict) >>> ? File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", >>> line 79, in wrapper >>> ? ? return method(self, *args, **kwargs) >>> ? File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >>> 1853, in mergeSnapshots >>> ? ? discard) >>> ? File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", >>> line 1251, in merge >>> ? ? 
srcVol = vols[successor] >>> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd' >>> >>> Ala, maybe you know if there is any known issue with mergeSnapshots? >>> The usecase here are VMs from oVirt 3.5 which got registered to >>> oVirt 4.2. >>> >>> Regards, >>> Maor >>> >>> >>> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti >>> > >>> wrote: >>> >>> ? Hi, >>> also you can download them throught these >>> links: >>> >>> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD >>> >>> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb >>> >>> >>> Thanks again !!!! >>> >>> Best Regards >>> Enrico >>> >>>> Il 13/02/2018 14:52, Maor Lipchuk ha scritto: >>>>> >>>>> >>>>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk >>>>> > wrote: >>>>> >>>>> >>>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti >>>>> >>>> > wrote: >>>>> >>>>> see the attach files please ... thanks for your >>>>> attention !!! >>>>> >>>>> >>>>> >>>>> Seems like the engine logs does not contain the entire >>>>> process, can you please share older logs since the import >>>>> operation? >>>>> >>>>> >>>>> And VDSM logs as well from your host >>>>> >>>>> Best Regards >>>>> Enrico >>>>> >>>>> >>>>> Il 13/02/2018 14:09, Maor Lipchuk ha scritto: >>>>>> >>>>>> >>>>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti >>>>>> >>>>> > wrote: >>>>>> >>>>>> ?Dear All, >>>>>> I have been using ovirt for a long time with >>>>>> three hypervisors and an external engine running >>>>>> in a centos vm . >>>>>> >>>>>> This three hypervisors have HBAs and access to >>>>>> fiber channel storage. Until recently I used >>>>>> version 3.5, then I reinstalled everything from >>>>>> scratch and now I have 4.2. >>>>>> >>>>>> Before formatting everything, I detach the >>>>>> storage data domani (FC) with the virtual >>>>>> machines and reimported it to the new 4.2 and all >>>>>> went well. In >>>>>> this domain there were virtual machines with and >>>>>> without snapshots. >>>>>> >>>>>> Now I have two problems. 
The first is that if I >>>>>> try to delete a snapshot the process is not end >>>>>> successful and remains hanging and the second >>>>>> problem is that >>>>>> in one case I lost the virtual machine !!! >>>>>> >>>>>> >>>>>> >>>>>> Not sure that I fully understand the scneario.' >>>>>> How was the virtual machine got lost if you only >>>>>> tried to delete a snapshot? >>>>>> >>>>>> >>>>>> So I need your help to kill the three running >>>>>> zombie tasks because with taskcleaner.sh I can't >>>>>> do anything and then I need to know how I can >>>>>> delete the old snapshots >>>>>> made with the 3.5 without losing other data or >>>>>> without having new processes that terminate >>>>>> correctly. >>>>>> >>>>>> If you want some log files please let me know. >>>>>> >>>>>> >>>>>> >>>>>> Hi Enrico, >>>>>> >>>>>> Can you please attach the engine and VDSM logs >>>>>> >>>>>> >>>>>> Thank you so much. >>>>>> Best Regards >>>>>> Enrico >>>>>> >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> _______________________________________________________________________ >>>>> >>>>> Enrico Becchetti Servizio di Calcolo e Reti >>>>> >>>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>>> ______________________________________________________________________ >>>>> >>>>> >>>>> >>>> >>>> -- >>>> _______________________________________________________________________ >>>> >>>> Enrico Becchetti Servizio di Calcolo e Reti >>>> >>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>> ______________________________________________________________________ 
>>> >>> >>> -- >>> _______________________________________________________________________ >>> >>> Enrico Becchetti Servizio di Calcolo e Reti >>> >>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>> ______________________________________________________________________ >>> >>> >> >> -- >> _______________________________________________________________________ >> >> Enrico Becchetti Servizio di Calcolo e Reti >> >> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: Firma crittografica S/MIME URL: From thomas.fecke at eset.de Fri Feb 16 08:59:35 2018 From: thomas.fecke at eset.de (Thomas Fecke) Date: Fri, 16 Feb 2018 08:59:35 +0000 Subject: [ovirt-users] Permission on Vm and User portal In-Reply-To: References: Message-ID: <47c975e531094928a075be4125a5ab33@DR1-XEXCH01-B.eset.corp> Hey Guys, Just upgrade to 4.2.2 and still the same Issue. Someone found a Solution? From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On Behalf Of carl langlois Sent: Freitag, 26. Januar 2018 15:19 To: giorgio at di.unimi.it Cc: users Subject: Re: [ovirt-users] Permission on Vm and User portal Thanks all for the info .. so it seem that will have to wait 4.2.2. like last comment in this issue is specifying https://github.com/oVirt/ovirt-web-ui/issues/460 Regards Carl On Fri, Jan 26, 2018 at 6:43 AM, Giorgio Biacchi > wrote: It seems it's a bug. There's already another thread here with this subject: Ovirt 4.2 Bug with Permissons on the Vm Portal? I've enabled ovirt 4.2 pre-release repo but the problem is still present in version 4.2.1.3-1.el7.centos Somewhere i read that will be fixed in 4.2.2, I'm waiting... Regards On 01/26/2018 12:13 PM, Donny Davis wrote: I have been trying to get this worked out myself. Firstly someone with a system permission will be able to see things from the system level. I have been adding the permission at the cluster level, but I also just can't seem to figure out the user portal in 4.2. they can either see it all or nothing, even vms they create. I have been using the permissions from this post to no avail. 
These permissions have worked fine since the 3.x days: http://lists.ovirt.org/pipermail/users/2015-January/030981.html

On Jan 25, 2018 11:57 AM, "carl langlois" wrote:

Hi all, in 4.1 I was able to assign one user to one VM, and in the user portal that same user was only seeing this specific VM. But with 4.2 I have trouble with permissions.

The way I add a permission for a specific user is to click on the VM in the admin portal, go to Permissions, and add the user (an Active Directory user). If I log back in with this user on the user portal, I do not see the VM that was given the permission. But if I add the same user in the System Permissions tab in the admin portal, give it the UserRole, and log back in to the user portal, the user can now see all the VMs; I only want the user to see his VM, not all the others.

There is a difference when the user is added from the two different places: when added from the system permissions it shows "(System)" in the Inherited Permission column; when added from the VM's Permissions tab it does not.

Any hints would be appreciated.

Carl

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
gb

PGP Key: http://pgp.mit.edu/
Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fsoyer at systea.fr Fri Feb 16 09:03:37 2018
From: fsoyer at systea.fr (fsoyer)
Date: Fri, 16 Feb 2018 10:03:37 +0100
Subject: [ovirt-users] VMs with multiple vdisks don't migrate
In-Reply-To: 
Message-ID: <4663-5a869e80-5-50066700@115233288>

Hi Maor, sorry for the double post: I changed the email address on my account and assumed I'd need to re-post it. And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, and I had to power it off after several minutes. Then simply deleting the second disk makes it migrate in exactly 9 seconds without any problem!?

https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d

--
Cordialement,
Frank Soyer

Le Mercredi, Février 14, 2018 11:04 CET, Maor Lipchuk a écrit :

Hi Frank, I already replied on your last email. Can you provide the VDSM logs from the time of the migration failure for both hosts, ginger.local.systea.fr and victor.local.systea.fr? Thanks, Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote:

Hi all, yesterday I discovered a problem when migrating VMs with more than one vdisk. On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time extending the existing vdisks... but I lost time in the end...). The VMs with the 2 vdisks work well. Then I saw some updates waiting on the host and tried to put it into maintenance... but it got stuck on the two VMs: they were marked "migrating" but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time. I saw that a kvm process for the (big) VMs was launched on the source AND the destination host, but after tens of minutes the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: it failed.
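For anyone hitting a similar hang, a quick way to compare what each layer believes is running is to run the same commands discussed elsewhere on this list on both the source and the destination host. This is a diagnostic sketch, not part of the original report; `virsh -r` (read-only mode) is used because full virsh access on oVirt hosts normally requires SASL credentials:

```shell
# Compare VDSM's view and libvirt's view of the running VMs on this host.
# A VM reported by vdsm-client but absent from the virsh output suggests a
# stale VDSM record left behind by the failed migration.
for view in "vdsm-client Host getVMList" "virsh -r list --all"; do
    echo "== $view =="
    $view 2>/dev/null || echo "(command unavailable or failed on this machine)"
done
```

Run it on both hosts and compare: a VM stuck in "migrating" typically shows up on both hosts at once, while a cleanly migrated VM appears only on the destination.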
The only way to stop it was to power off the VMs: the kvm processes died on the 2 hosts and the GUI reported a failed migration. In doubt, I tried to delete the second vdisk on one of these VMs: it then migrated without error, and with no access problem. I tried to extend the first vdisk of the second VM, then delete the second vdisk: it now migrates without problem!

So after another test with a VM with 2 vdisks, I can say that this is what blocked the migration process :(

In engine.log, for a VM with 1 vdisk migrating well, we see:

2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false.
Entities affected : ?ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, 
action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin at internal-authz). 
2018-02-12 16:46:31,106+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, 
device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, 
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, 
tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} ? 
For the VM with 2 vdisks we see : 2018-02-12 16:49:06,112+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ?ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', 
hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin at internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
...
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done

... and so on, with the last lines repeated indefinitely for hours until we powered off the VM... Is this a known issue? Any idea about it?

Thanks.
oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.

--
Cordialement,
Frank Soyer

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ladislav.humenik at 1und1.de Fri Feb 16 09:40:03 2018
From: ladislav.humenik at 1und1.de (Ladislav Humenik)
Date: Fri, 16 Feb 2018 10:40:03 +0100
Subject: [ovirt-users] XML error ovirt-4.2.1 release
Message-ID: <5ec7babe-f7c4-0d9b-6198-a01312e5abd7@1und1.de>

Hello all, we just tested the 4.2.0 release and it worked fine so far. Yesterday we updated to the latest 4.2.1, and since then we cannot send a request and receive a response; the error occurs when checking the response from the server:

checkContentType(XML_CONTENT_TYPE_RE, "XML", response.getFirstHeader("content-type").getValue());

It seems the response is not of XML type, and the error thrown is:

throw new Error("Failed to send request", e);

Through the web API I can connect and see everything, but through the SDK it exits. I've tried both the 4.2.0 and 4.2.1 SDKs of oVirt.

--
Ladislav Humenik

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From giorgio at di.unimi.it Fri Feb 16 09:45:54 2018
From: giorgio at di.unimi.it (Giorgio Biacchi)
Date: Fri, 16 Feb 2018 10:45:54 +0100
Subject: [ovirt-users] Permission on Vm and User portal
In-Reply-To: <47c975e531094928a075be4125a5ab33@DR1-XEXCH01-B.eset.corp>
References: <47c975e531094928a075be4125a5ab33@DR1-XEXCH01-B.eset.corp>
Message-ID: 

Hi, check the workaround in the last comment here: https://github.com/oVirt/ovirt-web-ui/issues/460 It seems to work.

Regards

On 02/16/2018 09:59 AM, Thomas Fecke wrote:
> Hey Guys,
>
> Just upgrade to 4.2.2 and still the same Issue.
>
> Someone found a Solution?
>
> *From:*users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] *On Behalf Of
> *carl langlois
> *Sent:* Friday, 26 January 2018 15:19
> *To:* giorgio at di.unimi.it
> *Cc:* users
> *Subject:* Re: [ovirt-users] Permission on Vm and User portal
>
> Thanks all for the info .. so it seem that will have to wait 4.2.2.
like last > comment in this issue is specifying > > https://github.com/oVirt/ovirt-web-ui/issues/460 > > Regards > > Carl > > On Fri, Jan 26, 2018 at 6:43 AM, Giorgio Biacchi > wrote: > > It seems it's a bug. There's already another thread here with this subject: > > Ovirt 4.2 Bug with Permissons on the Vm Portal? > > I've enabled ovirt 4.2 pre-release repo but the problem is still present in > version 4.2.1.3-1.el7.centos > > Somewhere i read that will be fixed in 4.2.2, I'm waiting... > > Regards > > On 01/26/2018 12:13 PM, Donny Davis wrote: > > I have been trying to get this worked out myself. > > Firstly someone with a system permission will be able to see things from > the system level. I have been adding the permission at the cluster > level, but I also just can't seem to figure out the user portal in 4.2. > they can either see it all or nothing, even vms they create. > > I have been using the permissions from this post to no avail. > These permissions have worked fine since 3.x days > > http://lists.ovirt.org/pipermail/users/2015-January/030981.html > > > > On Jan 25, 2018 11:57 AM, "carl langlois" >> wrote: > > ? ? Hi all, > > ? ? In 4.1 i was able to assign 1 user to one VM and in the user portal > that > ? ? same user was only seeing this specific VM. But with 4.2 i have > trouble with > ? ? permission. > > ? ? The way i add permission to a specific user is go click on the VM > in the > ? ? admin portal, then go in permission and add the user(active > directory user). > ? ? If i log back with this user on the user portal i do not see the VM > that was > ? ? given the permission. > ? ? But if i add the same user in the system permission tab in the > admin portal > ? ? and give it the UserRole and log back to the user portal, now he > can see all > ? ? the VM but i only want the user to see is vm not all others ... > > ? ? there is a difference when the is add from the two different > place.. is the > ? ? attribute : > ? ? 
when added from the system permission, it adds the (System) in the inherited > permission column, > when added from the VM permission tab it does not have that.. > > > Any hints would be appreciated. > > Carl > > _______________________________________________ > Users mailing list > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -- > gb > > PGP Key: http://pgp.mit.edu/ > Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34 > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- gb PGP Key: http://pgp.mit.edu/ Primary key fingerprint: C510 0765 943E EBED A4F2 69D3 16CC DC90 B9CB 0F34 From enrico.becchetti at pg.infn.it Fri Feb 16 10:01:15 2018 From: enrico.becchetti at pg.infn.it (Enrico Becchetti) Date: Fri, 16 Feb 2018 11:01:15 +0100 Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!! In-Reply-To: <0f849bf2-6766-aa64-0395-48ae6d6ade8e@pg.infn.it> References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it> <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it> <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it> <0f849bf2-6766-aa64-0395-48ae6d6ade8e@pg.infn.it> Message-ID: Hi all, to remove the lock I found this script: " [root at ovirt-new dbutils]# ./unlock_entity.sh -t disk 01f9b5f2-9e48-4c24-80e5-dca7f1d4d128 Caution, this operation may lead to data corruption and should be used with care. Please contact support prior to running this command Are you sure you want to proceed? [y/n] y select fn_db_unlock_disk('01f9b5f2-9e48-4c24-80e5-dca7f1d4d128'); INSERT 0 1 unlock disk 01f9b5f2-9e48-4c24-80e5-dca7f1d4d128 completed successfully. " but the virtual disk is still locked.
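For what it's worth: when unlock_entity.sh reports success but the portal still shows the disk locked, it can help to read the status column straight out of the engine database. A read-only sketch for the engine host — the table/column names and the status values are my reading of the engine schema and its ImageStatus enum, so please verify them against your engine version before acting on anything:

```shell
# Inspect one disk's status directly in the engine DB (SELECT only, so it
# is harmless). imagestatus values as I remember them from the engine
# source: 1 = OK, 2 = LOCKED, 4 = ILLEGAL -- double-check on your version.
DISK_ID="01f9b5f2-9e48-4c24-80e5-dca7f1d4d128"
QUERY="SELECT image_guid, imagestatus FROM images WHERE image_guid = '$DISK_ID';"
if [ "$(id -u)" -eq 0 ] && command -v psql >/dev/null 2>&1; then
    su - postgres -c "psql engine -c \"$QUERY\"" || echo "query failed -- check the DB name"
else
    echo "run on the engine host as root: $QUERY"
fi
```

If it still reads 2 (LOCKED) after unlock_entity.sh claimed success, restarting ovirt-engine has been necessary in my experience, since the running engine caches the old status.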
Enrico On 16/02/2018 09:50, Enrico Becchetti wrote: > After rebooting the engine virtual machine the task disappeared, but the virtual disk is > still locked, > any ideas to remove that lock ? > Thanks again. > Enrico > > On 16/02/2018 09:45, Enrico Becchetti wrote: >> Dear All, >> Are there tools to remove this task (in attach) ? >> >> taskcleaner.sh seems not to work: >> >> [root at ovirt-new dbutils]# ./taskcleaner.sh -v -r >> select exists (select * from information_schema.tables where >> table_schema = 'public' and table_name = 'command_entities'); >> t >> SELECT DeleteAllCommands(); >> 6 >> [root at ovirt-new dbutils]# ./taskcleaner.sh -v -R >> select exists (select * from information_schema.tables where >> table_schema = 'public' and table_name = 'command_entities'); >> t >> This will remove all async_tasks table content!!! >> Caution, this operation should be used with care. Please contact >> support prior to running this command >> Are you sure you want to proceed? [y/n] >> y >> TRUNCATE TABLE async_tasks cascade; >> TRUNCATE TABLE >> >> after that I still see the same running tasks. Does it make sense ? >> >> Thanks >> Best Regards >> Enrico >> >> >> On 14/02/2018 15:53, Enrico Becchetti wrote: >>> Dear All, >>> the old snapshots seem to be the problem. In fact the domain DATA_FC >>> running in 3.5 had some >>> lvm snapshot volumes. Before deactivating DATA_FC I didn't remove >>> these snapshots, so when >>> I attached this volume to the new ovirt 4.2 and imported all vms at the same >>> time I also imported >>> all the snapshots, but now how can I remove them ? Through the ovirt web >>> interface the remove >>> tasks are still hanging. Are there any other methods ? >>> Thanks for following this case. >>> Best Regards >>> Enrico >>> >>> On 14/02/2018 14:34, Maor Lipchuk wrote: >>>> Seems like all the engine logs are full of the same error.
>>>> From vdsm.log.16.xz I can see an error which might explain this >>>> failure: >>>> >>>> 2018-02-12 07:51:16,161+0100 INFO (ioprocess communication >>>> (40573)) [IOProcess] Starting ioprocess (__init__:447) >>>> 2018-02-12 07:51:16,201+0100 INFO (jsonrpc/3) [vdsm.api] FINISH >>>> mergeSnapshots return=None from=::ffff:10.0.0.46,57032, >>>> flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, >>>> task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (api:52) >>>> 2018-02-12 07:51:16,275+0100 INFO (jsonrpc/3) >>>> [jsonrpc.JsonRpcServer] RPC call Image.mergeSnapshots succeeded in >>>> 0.13 seconds (__init__:573) >>>> 2018-02-12 07:51:16,276+0100 INFO (tasks/1) >>>> [storage.ThreadPool.WorkerThread] START task >>>> 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd=>>> of >, args=None) >>>> (threadPool:208) >>>> 2018-02-12 07:51:16,543+0100 INFO (tasks/1) [storage.Image] >>>> sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID= >>>> imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 >>>> ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825 >>>> successor=8f595e80-1013-4c14-a2f5-252bce9526fd postZero=False >>>> discard=False (image:1240) >>>> 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) >>>> [storage.TaskManager.Task] >>>> (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error >>>> (task:875) >>>> Traceback (most recent call last): >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", >>>> line 882, in _run >>>> return fn(*args, **kargs) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", >>>> line 336, in run >>>> return self.cmd(*self.argslist, **self.argsdict) >>>> File >>>> "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line >>>> 79, in wrapper >>>> return method(self, *args, **kwargs) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line >>>> 1853, in mergeSnapshots >>>> discard) >>>> File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", >>>> line 1251, in merge >>>> 
srcVol = vols[successor] >>>> KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd' >>>> >>>> Ala, maybe you know if there is any known issue with mergeSnapshots? >>>> The usecase here is VMs from oVirt 3.5 which got registered to >>>> oVirt 4.2. >>>> >>>> Regards, >>>> Maor >>>> >>>> >>>> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti >>>> > >>>> wrote: >>>> >>>> Hi, >>>> you can also download them through these >>>> links: >>>> >>>> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD >>>> >>>> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb >>>> >>>> >>>> Thanks again !!!! >>>> >>>> Best Regards >>>> Enrico >>>> >>>>> On 13/02/2018 14:52, Maor Lipchuk wrote: >>>>>> >>>>>> >>>>>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk >>>>>> > wrote: >>>>>> >>>>>> >>>>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti >>>>>> >>>>> > wrote: >>>>>> >>>>>> see the attached files please ... thanks for your >>>>>> attention !!! >>>>>> >>>>>> >>>>>> >>>>>> Seems like the engine logs do not contain the entire >>>>>> process, can you please share older logs since the import >>>>>> operation? >>>>>> >>>>>> >>>>>> And VDSM logs as well from your host >>>>>> >>>>>> Best Regards >>>>>> Enrico >>>>>> >>>>>> >>>>>> On 13/02/2018 14:09, Maor Lipchuk wrote: >>>>>>> >>>>>>> >>>>>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti >>>>>>> >>>>>> > wrote: >>>>>>> >>>>>>> Dear All, >>>>>>> I have been using ovirt for a long time with >>>>>>> three hypervisors and an external engine running >>>>>>> in a centos vm . >>>>>>> >>>>>>> These three hypervisors have HBAs and access to >>>>>>> fiber channel storage. Until recently I used >>>>>>> version 3.5, then I reinstalled everything from >>>>>>> scratch and now I have 4.2. >>>>>>> >>>>>>> Before formatting everything, I detached the >>>>>>> storage data domain (FC) with the virtual >>>>>>> machines and reimported it to the new 4.2 and >>>>>>> all went well.
In >>>>>>> this domain there were virtual machines with and >>>>>>> without snapshots. >>>>>>> >>>>>>> Now I have two problems. The first is that if I >>>>>>> try to delete a snapshot the process does not end >>>>>>> successfully and remains hanging, and the second >>>>>>> problem is that >>>>>>> in one case I lost the virtual machine !!! >>>>>>> >>>>>>> >>>>>>> >>>>>>> Not sure that I fully understand the scenario. >>>>>>> How did the virtual machine get lost if you only >>>>>>> tried to delete a snapshot? >>>>>>> >>>>>>> >>>>>>> So I need your help to kill the three running >>>>>>> zombie tasks, because with taskcleaner.sh I can't >>>>>>> do anything, and then I need to know how I can >>>>>>> delete the old snapshots >>>>>>> made with 3.5 without losing other data or >>>>>>> without having new processes that do not terminate >>>>>>> correctly. >>>>>>> >>>>>>> If you want some log files please let me know. >>>>>>> >>>>>>> >>>>>>> >>>>>>> Hi Enrico, >>>>>>> >>>>>>> Can you please attach the engine and VDSM logs? >>>>>>> >>>>>>> >>>>>>> Thank you so much.
>>>>>>> Best Regards >>>>>>> Enrico >>>>>>> >>>>>>> >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> _______________________________________________________________________ >>>>>> >>>>>> Enrico Becchetti Servizio di Calcolo e Reti >>>>>> >>>>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>>>> ______________________________________________________________________ >>>>>> >>>>>> >>>>>> >>>>> >>>>> -- >>>>> _______________________________________________________________________ >>>>> >>>>> Enrico Becchetti Servizio di Calcolo e Reti >>>>> >>>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>>> ______________________________________________________________________ >>>> >>>> >>>> -- >>>> _______________________________________________________________________ >>>> >>>> Enrico Becchetti Servizio di Calcolo e Reti >>>> >>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>>> ______________________________________________________________________ >>>> >>>> >>> >>> -- >>> _______________________________________________________________________ >>> >>> Enrico Becchetti Servizio di Calcolo e Reti >>> >>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >>> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >>> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >>> ______________________________________________________________________ >>> >>> >>> 
_______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> >> >> -- >> _______________________________________________________________________ >> >> Enrico Becchetti Servizio di Calcolo e Reti >> >> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia >> Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) >> Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- _______________________________________________________________________ Enrico Becchetti Servizio di Calcolo e Reti Istituto Nazionale di Fisica Nucleare - Sezione di Perugia Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) Phone:+39 075 5852777 Mail: Enrico.Becchettipg.infn.it ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 2269 bytes Desc: Firma crittografica S/MIME URL: From omachace at redhat.com Fri Feb 16 10:01:31 2018 From: omachace at redhat.com (Ondra Machacek) Date: Fri, 16 Feb 2018 11:01:31 +0100 Subject: [ovirt-users] Internal Server Error while add Permission [cli] In-Reply-To: <077d1469b6bf4f3c886b50c69af94b2f@DR1-XEXCH01-B.eset.corp> References: <077d1469b6bf4f3c886b50c69af94b2f@DR1-XEXCH01-B.eset.corp> Message-ID: Hi, in the /var/log/ovirt-engine/server.log there will be some trace of the exception, right after running that command, can you please share it? Thanks. On 02/16/2018 09:40 AM, Thomas Fecke wrote: > Hey dear Community, > > I work a bit with that ovirt shell. That worked pretty fine but I got > some Problems when I try to add Permission: > > What I want to do: > > Add a Role to an VM > > What I did: > > add permission --parent-vm-name vm1 --user-id user1 --role-id UserVmCreator > > Error: > > status: 500 > > ? reason: Internal Server Error > > ? detail: > > ErrorInternal Server > Error > > Any other cli command works fine for me. What am I doing wrong? Thank you ! > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From nicolas at devels.es Fri Feb 16 10:05:41 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Fri, 16 Feb 2018 10:05:41 +0000 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <016BDB70-B461-44C0-B1FD-30FBC403A96B@redhat.com> Message-ID: <07a2ee8934ce53b68dac8db942cfb3be@devels.es> Thanks guys, setting the VM to headless and then removing this option seemed to do the trick. Indeed, this VM was created back in 3.6, however it had QXL set in options. Seems that despite this something was not migrated correctly. 
I think I have more machines in this situation, in case the oVirt team would like to make some additional tests. Regards. On 2018-02-15 19:39, Vineet Khandpur wrote: > Hello. > > Just had the same issue > > @ 4.1, upgraded to 4.2(.1) > > Updated cluster compatibility, then data centre compatibility > > All VMs lost their hardware (NICs (showed attached but unplugged), > disks (status changed to disabled) and console) > > Our solution was simply to connect the NICs, activate the disks, then > edit the VM and set Console to headless. > > Shut down the VM > > Then before bringing it back up, unchecked headless in the VM > > We then had to do a Run-Once which failed > > Then did a normal Run. > > Console was available, and all hardware came back fine. > > Didn't have to delete and re-create anything (although had to perform > the above on all 70+ production hosts including our main web servers > and HA load balancers .. which wasn't fun) ... > > Hope this helps someone > > vk > > On 15 February 2018 at 12:28, John Taylor wrote: > >> On Thu, Feb 15, 2018 at 11:54 AM, Michal Skrivanek >> wrote: >>> >>> >>>> On 15 Feb 2018, at 15:58, John Taylor >> wrote: >>>> >>>> Hi Nicolas, >>>> I had the same problem and it looked like it was because of some >> older >>>> vms (I believe from 3.6) that were configured with console with >> video >>>> type of CIRRUS and protocol VNC. >>> >>> 3.6 had cirrus indeed. That should work. Can you somehow confirm >> it was really a 3.6 VM and it stopped working in 4.2? The exact >> steps are important, unfortunately. >>> >> >> I'm pretty sure they were VMs created in 3.6, but I can't say for >> absolute certain. Sorry. >> >>>> Tracing it out it showed that the vm's libvirt xml was being set >> to >>>> headless. >>> >>> do you at least recall what cluster level version it was when it >> stopped working?
The VM definition should have been changed to VGA >> when you move the VM from 3.6 cluster to a 4.0+ >> >> I upgraded from 4.1.something and I'm pretty sure at the time the >> cluster level was 4.1, and those same VMs were able to get >> consoles. >> Sorry I can't be more help now. I'll see if I have any notes that >> might help me remember. >> >>> >>>> ? I tried different settings but the only thing that seemed >>>> to work was to set them to headless, then reopen config and set >> them >>>> to something else. >>>> >>>> -John >>>> >>>> On Thu, Feb 15, 2018 at 8:48 AM,? wrote: >>>>> Hi, >>>>> >>>>> We upgraded one of our infrastructures to 4.2.0 recently and >> since then some >>>>> of our machines have the "Console" button greyed-out in the >> Admin UI, like >>>>> they were disabled. >>>>> >>>>> I changed their compatibility to 4.2 but with no luck, as >> they're still >>>>> disabled. >>>>> >>>>> Is there a way to know why is that, and how to solve it? >>>>> >>>>> I'm attaching a screenshot. >>>>> >>>>> Thanks. 
>>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users [1] >>>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users [1] >>>> >>>> >>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users [1] > > > > Links: > ------ > [1] http://lists.ovirt.org/mailman/listinfo/users > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From msteele at telvue.com Fri Feb 16 10:23:54 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 05:23:54 -0500 Subject: [ovirt-users] ERROR - some other host already uses IP ###.###.###.### In-Reply-To: References: Message-ID: Thank you Edward, I actually did find the offending IP - I was using a small range that was already in use but not documented. Best regards, *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Thu, Feb 15, 2018 at 5:14 PM, Edward Haas wrote: > > > On Thu, Feb 15, 2018 at 12:36 PM, Mark Steele wrote: > >> Good morning, >> >> We had a storage crash early this morning that messed up a couple of our >> ovirt hosts. Networking seemed to be the biggest issue. I have decided to >> remove the bridge information in /etc/sysconfig/network-scripts and ip the >> nics in order to re-import them into my ovirt installation (I have already >> removed the hosts). 
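A cautious sketch of that bridge cleanup, in case it helps the next person. It assumes the stock CentOS ifcfg-* layout and that the oVirt-created bridges carry TYPE=Bridge (both are assumptions about your hosts), so review what it lists before deleting anything:

```shell
# Back up every interface config, then list the bridge definitions that
# were left behind so they can be removed by hand.
NETDIR="${NETDIR:-/etc/sysconfig/network-scripts}"
BACKUP="$HOME/netbackup-$(date +%Y%m%d)"
mkdir -p "$BACKUP"
cp "$NETDIR"/ifcfg-* "$BACKUP"/ 2>/dev/null || true
# These are only candidates -- inspect each file, then remove it and
# restart networking, e.g.:
#   rm /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt && systemctl restart network
grep -l '^TYPE=Bridge' "$NETDIR"/ifcfg-* 2>/dev/null || echo "no bridge configs found"
```

On a 3.5-era host the bridge is typically named after the logical network (e.g. ovirtmgmt), but that is host-specific.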
One of the NICs refuses to come up and is generating the following error: >> >> ERROR : [/etc/sysconfig/network-scripts/ifup-eth] Error, some other >> host (0C:C4:7A:5B:11:5C) already uses address ###.###.###.###. >> >> When I ARP on this server, I do not see that MAC address - and none of my >> other hosts are using it either. I'm not sure where to go next other than >> completely reinstalling CentOS on this server and starting over. >> > > I think it tells you that another node on the network is using the same IP > address. If this iface has that static IP defined, perhaps just replace it. > > >> >> Ovirt version is oVirt Engine Version: 3.5.0.1-1.el6 >> > > Very (very) old. > > >> >> OS version is >> >> CentOS Linux release 7.4.1708 (Core) >> >> > >> Thank you >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.fecke at eset.de Fri Feb 16 10:37:45 2018 From: thomas.fecke at eset.de (Thomas Fecke) Date: Fri, 16 Feb 2018 10:37:45 +0000 Subject: [ovirt-users] VM Portal - ADD Nic Message-ID: Hey Guys, We got about 50 Users and 50 VLANS. Every User has his own Vlan. With 4.1 they could log in to the User Portal, select a Template or create a new VM, add a Disk and connect it to their Nic. I see there is no option to add a Disk anymore with 4.2 -> okay, that's fine for me. So they can just use Templates. But there is no option to add the VM to a nic. So I guess the Template nic is being used.
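On the missing add-NIC control: until the portal grows it back, the REST API can attach one. A hedged sketch — the endpoint and element names below are from my memory of API v4 (POST to the vm's nics sub-collection), so check them against your engine's API browser; the host name and IDs are placeholders:

```shell
# Build the body for attaching one NIC to an existing VM. The vNIC
# profile carries the network/VLAN, so the one-VLAN-per-user case
# reduces to picking the right profile id for each user.
ENGINE="https://engine.example.com"   # placeholder engine URL
VM_ID="<vm-id>"                       # placeholder: GET $ENGINE/ovirt-engine/api/vms
PROFILE_ID="<vnic-profile-id>"        # placeholder: GET $ENGINE/ovirt-engine/api/vnicprofiles
BODY="<nic><name>nic1</name><vnic_profile id=\"$PROFILE_ID\"/></nic>"
echo "would POST to $ENGINE/ovirt-engine/api/vms/$VM_ID/nics: $BODY"
# Uncomment to actually send it (credentials are placeholders too):
# curl -k -u 'admin@internal:PASSWORD' -H 'Content-Type: application/xml' \
#      -X POST -d "$BODY" "$ENGINE/ovirt-engine/api/vms/$VM_ID/nics"
```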
But our Templates don't got a nic because the user has his own networks. That mean I need to add about XX more Templates with every nic in it? Oh common :) No way to add a nic via VM Portal? That really make the VM Portal unusable for us We can?t be the only one using Templates like that. Now every VM set up in VM Portal is in one network, that's not good or do I miss something? -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabrice.bacchella at orange.fr Fri Feb 16 11:04:12 2018 From: fabrice.bacchella at orange.fr (Fabrice Bacchella) Date: Fri, 16 Feb 2018 12:04:12 +0100 Subject: [ovirt-users] database restoration Message-ID: <2E4BC6F0-E5D1-44D4-B21D-B890C15C3FFC@orange.fr> I'm running a restoration test and getting the following log generated by engine-backup --mode=restore: pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 4274; 0 0 COMMENT EXTENSION plpgsql pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; pg_restore: WARNING: no privileges could be revoked for "public" pg_restore: WARNING: no privileges could be revoked for "public" pg_restore: WARNING: no privileges were granted for "public" pg_restore: WARNING: no privileges were granted for "public" WARNING: errors ignored on restore: 1 Do I need to worry, as this error is ignored ? From msteele at telvue.com Fri Feb 16 13:45:36 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 08:45:36 -0500 Subject: [ovirt-users] Requirements for basic host Message-ID: Hello again, I'm building a new host for my cluster and have a quick question about required software for joining the host to my cluster. 
In my notes from a previous colleague, I am instructed to do the following: yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm yum install ovirt-hosted-engine-setup hosted-engine --deploy We already have a HostedEngine running on another server in the cluster - so do I need to install ovirt-hosted-engine-setup and then deploy it for this server to join the cluster and operate properly? As always - thank you for your time. *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue -------------- next part -------------- An HTML attachment was scrubbed... URL: From andreil1 at starlett.lv Fri Feb 16 13:54:01 2018 From: andreil1 at starlett.lv (Andrei V) Date: Fri, 16 Feb 2018 15:54:01 +0200 Subject: [ovirt-users] Requirements for basic host In-Reply-To: References: Message-ID: Nope, you don?t need additional engine. > On 16 Feb 2018, at 15:45, Mark Steele wrote: > > Hello again, > > I'm building a new host for my cluster and have a quick question about required software for joining the host to my cluster. > > In my notes from a previous colleague, I am instructed to do the following: > > yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm > yum install ovirt-hosted-engine-setup > hosted-engine --deploy > We already have a HostedEngine running on another server in the cluster - so do I need to install ovirt-hosted-engine-setup and then deploy it for this server to join the cluster and operate properly? > > As always - thank you for your time. > > *** > Mark Steele > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. 
Laurel, NJ 08054 > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 13:55:36 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 08:55:36 -0500 Subject: [ovirt-users] Requirements for basic host In-Reply-To: References: Message-ID: Thank you! Now if I can just get past this failing to install into the cluster on error code 1! Seems to be some sort of authentication issue between the hosted engine and the new server just at the end I'll post some more information in a new email *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Fri, Feb 16, 2018 at 8:54 AM, Andrei V wrote: > > Nope, you don?t need additional engine. > > > On 16 Feb 2018, at 15:45, Mark Steele wrote: > > Hello again, > > I'm building a new host for my cluster and have a quick question about > required software for joining the host to my cluster. > > In my notes from a previous colleague, I am instructed to do the following: > > yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release35.rpm > yum install ovirt-hosted-engine-setup > hosted-engine --deploy > > We already have a HostedEngine running on another server in the cluster - > so do I need to install ovirt-hosted-engine-setup and then deploy it for > this server to join the cluster and operate properly? > > As always - thank you for your time. 
> > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 16:45:59 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 11:45:59 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster Message-ID: Hello all, We recently had a network event where we lost access to our storage for a period of time. The Cluster basically shut down all our VM's and in the process we had three HV's that went offline and would not communicate properly with the cluster. We have since completely reinstalled CentOS on the hosts and attempted to install them into the cluster with no joy. We've gotten to the point where we generally get an error message in the web gui: Stage: Misc Configuration Host hv-ausa-02 installation failed. Command returned failure code 1 during SSH session 'root at 10.1.90.154'. 
the following is what we are seeing in the messages log: Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: error : virNetSASLSessionListMechanisms:390 : internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in server.c near line 1757) Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: error : remoteDispatchAuthSaslInit:3411 : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15226: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: 15233: error : virNetSASLSessionListMechanisms:390 : internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in server.c near line 1757) Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15233: error : remoteDispatchAuthSaslInit:3411 : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15226: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line 219, in main Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return tool_command[cmd]["command"](*args) Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() Feb 16 11:39:53 hv-ausa-02 
vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in networks Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 159, in get Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, timeout=10, sleep=0.2) Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise libvirtError('virConnectOpenAuth() failed') Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control process exited, code=exited status=1 Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop Server Manager network restoration. Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual Desktop Server Manager. Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with result 'dependency'. Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered failed state. Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. Can someone point me in the right direction to resolve this - it seems to be a SASL issue perhaps? 
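For what it's worth, that failure pattern (libvirtd cannot list SASL mechanisms, vdsm cannot authenticate) has, in my experience, meant vdsm's libvirt SASL credentials were missing or stale after the reinstall, and re-running vdsm-tool's configurator fixed it. A sketch — the command names are from memory of 3.5/4.x hosts, and it briefly stops VDSM, so verify locally first:

```shell
# fix_vdsm_sasl: regenerate vdsm's libvirt configuration, including the
# SASL credential database that libvirtd checks on connect.
fix_vdsm_sasl() {
    systemctl stop vdsmd vdsm-network libvirtd
    vdsm-tool configure --force   # rewrites libvirtd.conf/qemu.conf and the SASL db
    systemctl start libvirtd vdsmd
}

# Only attempt this where the tooling actually exists (i.e. on the host):
if command -v vdsm-tool >/dev/null 2>&1; then
    fix_vdsm_sasl
else
    echo "vdsm-tool not found; run this on the hypervisor itself"
fi
```

Afterwards, /etc/sasl2/libvirt.conf should name a mechanism (digest-md5 on my hosts) and the libvirt password database it points at should exist.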
*** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 19:08:03 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 14:08:03 -0500 Subject: [ovirt-users] Username / password for ovirt-shell Message-ID: Hello, I'm not the original system architect of our Cluster and I'm not able to locate any documentation regarding the username and password for our ovirt-shell CLI. Is there a config file on the HostedEngine that would point me in the right direction? Best regards, *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 19:17:59 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 14:17:59 -0500 Subject: [ovirt-users] Username / password for ovirt-shell In-Reply-To: References: Message-ID: Please disregard - there was a .ovirtshellrc file and I was able to figure out the credentials *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. 
Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Fri, Feb 16, 2018 at 2:08 PM, Mark Steele wrote: > Hello, > > I'm not the original system architect of our Cluster and I'm not able to > locate any documentation regarding the username and password for our > ovirt-shell CLI. > > Is there a config file on the HostedEngine that would point me in the > right direction? > > Best regards, > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 19:26:02 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 14:26:02 -0500 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <3439461.teDGIRJ3e8@awels> Message-ID: Using the ovirt-shell, I find that there are no vms assigned to this host: [oVirt shell (connected)]# list vms --query "host=hv-01" So I'm now looking to see where the host is reporting back to the engine that it has migrations in progress. Anyone know where to look for that? Thanks! *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Thu, Feb 15, 2018 at 4:35 PM, Alex K wrote: > Yes you can. > > On Feb 15, 2018 23:09, "Mark Steele" wrote: > >> I have with no joy. 
>> >> Question: Can I restart the HostedEngine with running VM's without >> negatively impacting them? >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <800%20885%208886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Thu, Feb 15, 2018 at 1:41 PM, Alexander Wels wrote: >> >>> On Thursday, February 15, 2018 1:14:44 PM EST Mark Steele wrote: >>> > Michal, >>> > >>> > Thank you for the response. >>> > >>> > - there are no qemu processes running >>> > - the server has been rebooted several times >>> > - the engine has been rebooted several times >>> > >>> > The issue persists. I'm not sure where to look next. >>> > >>> >>> Have you tried right clicking on the host, and select 'Confirm Host has >>> been >>> rebooted' that is basically telling the engine that the host is fenced, >>> and >>> you should be able to put it into maintenance mode. It will ask >>> confirmation >>> but we know the host has been rebooted and nothing is running. >>> >>> > >>> > *** >>> > *Mark Steele* >>> > CIO / VP Technical Operations | TelVue Corporation >>> > TelVue - We Share Your Vision >>> > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> > 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com >>> > twitter: http://twitter.com/telvue | facebook: >>> > https://www.facebook.com/telvue >>> > >>> > On Thu, Feb 15, 2018 at 12:10 PM, Michal Skrivanek < >>> > >>> > michal.skrivanek at redhat.com> wrote: >>> > > On 15 Feb 2018, at 12:06, Mark Steele wrote: >>> > > >>> > > I have a host that is currently reporting down with NO VM's on it or >>> > > associated with it. 
However when I attempt to put it into maintenance >>> > > mode, >>> > > I get the following error: >>> > > >>> > > Host hv-01 cannot change into maintenance mode - not all Vms have >>> been >>> > > migrated successfully. Consider manual intervention: >>> stopping/migrating >>> > > Vms: (User: admin) >>> > > >>> > > I am running >>> > > oVirt Engine Version: 3.5.0.1-1.el6 >>> > > >>> > > >>> > > that?s a really old version?. >>> > > >>> > > first confirm there is no running vm on that host (log in there, >>> look for >>> > > qemu processes) >>> > > if not, it?s likely just engine issue, somewhere it lost track of >>> what?s >>> > > actually running there - in that case you could try to restart the >>> host, >>> > > restart engine. that should help >>> > > >>> > > >>> > > *** >>> > > *Mark Steele* >>> > > CIO / VP Technical Operations | TelVue Corporation >>> > > TelVue - We Share Your Vision >>> > > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> > > >> +Mt.+Laurel,+N >>> > > J+08054&entry=gmail&source=g> 800.885.8886 x128 <(800)%20885-8886> | >>> > > msteele at telvue.com | http:// www.telvue.com >>> > > twitter: http://twitter.com/telvue | facebook: https://www. >>> > > facebook.com/telvue >>> > > _______________________________________________ >>> > > Users mailing list >>> > > Users at ovirt.org >>> > > http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
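[Editor's note: the ovirt-shell credentials question earlier in this digest was resolved by an existing ~/.ovirtshellrc. For anyone who has to recreate one, the sketch below writes a minimal example. The section and key names follow the ovirt-cli config format as commonly documented, and every value (engine URL, user, password, CA path) is a placeholder to adapt, not a value from this cluster.]

```shell
# Write an example ovirt-shell config; the real file lives at ~/.ovirtshellrc.
# All values below are placeholders -- substitute your engine's details.
cat > ovirtshellrc.example <<'EOF'
[ovirt-shell]
url      = https://engine.example.com/ovirt-engine/api
username = admin@internal
password = ChangeMe
ca_file  = /etc/pki/ovirt-engine/ca.pem
insecure = False
EOF

# The password is stored in clear text, so lock the file down.
chmod 600 ovirtshellrc.example
grep '^url' ovirtshellrc.example
```

Once the real file is in place at ~/.ovirtshellrc, running `ovirt-shell` with no arguments picks the credentials up automatically, which is presumably how the cluster in this thread had been set up.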
URL: From ykaul at redhat.com Fri Feb 16 20:30:01 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 16 Feb 2018 22:30:01 +0200 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> Message-ID: On Feb 15, 2018 7:35 PM, "Christopher Cox" wrote: On 02/15/2018 11:10 AM, Michal Skrivanek wrote: ..snippity... with regards to oVirt 3.5 > that's a really old version... > I know I'll catch heat for this, but by "old" you mean like December of 2015? Just trying to put things into perspective. Thus it goes with the ancient and decrepit Red Hat Ent. 7.1 days, right? I know, I know, FOSS... the only thing worse than running today's code is running yesterday's. We still run a 3.5 oVirt in our dev lab, btw. But I would not have set that up (not that I would have recommended oVirt to begin with), preferring 3.4 at the time. I would have waited for 3.6. With that said, 3.5 isn't exactly on the "stable line" to Red Hat Virtualization, that was 3.4 and then 3.6. Red Hat doesn't support 3.x anymore, unless it's 3.6 with a specific subscription that extends its support. Some people can't afford major (downtime) upgrades every 3-6 months or so. But, arguably, maybe we shouldn't be running oVirt. Maybe it's not designed for "production". 3.4, 3.5 and 3.6 are minor releases of 3.x. The same way that 4.1 and 4.2 are minor releases of 4.x. I agree that with lots of changing landscape (for example, the move from EL6 to EL7) and with the number of features introduced, they don't seem that minor. But there's an ongoing effort to both keep backwards compatibility as well as continuously improve quality - which, regretfully, requires updating from time to time. I guess oVirt isn't really for production by definition, but many of us are doing so. So...
not really a "ding" against oVirt developers, it's just a rapidly moving target with the normal risks that come with that. People just need to understand that. And with that said, the fact that many of us are running those ancient decrepit evil versions of oVirt in production today is actually a testimony to its quality. Good job devs! Or a warning sign that upgrade is not yet as easy as it should be. I believe we've improved the experience and quality of the upgrade flow over time, but we can certainly do a better job. I also think there are two additional factors: 1. Don't fix what ain't broken - it works, why bother? Not much the oVirt community can do here. 2. Newer versions do not provide enough incentive to upgrade. This is a tougher one - I believe they do, both in terms of quality as well as new features that bring value to different use-cases. However, we may not be doing enough 'marketing' work around them, or they are not documented well enough, etc. Y. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Feb 16 20:31:50 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 16 Feb 2018 22:31:50 +0200 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: Hello all, We recently had a network event where we lost access to our storage for a period of time. The Cluster basically shut down all our VM's and in the process we had three HV's that went offline and would not communicate properly with the cluster. We have since completely reinstalled CentOS on the hosts and attempted to install them into the cluster with no joy. We've gotten to the point where we generally get an error message in the web gui: Which EL release and which oVirt release are you using?
My guess would be latest EL, with an older oVirt? Y. Stage: Misc Configuration Host hv-ausa-02 installation failed. Command returned failure code 1 during SSH session 'root at 10.1.90.154'. the following is what we are seeing in the messages log: Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: error : virNetSASLSessionListMechanisms:390 : internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in server.c near line 1757) Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: error : remoteDispatchAuthSaslInit:3411 : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15226: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: 15233: error : virNetSASLSessionListMechanisms:390 : internal error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in server.c near line 1757) Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15233: error : remoteDispatchAuthSaslInit:3411 : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15226: error : virNetSocketReadWire:1808 : End of file while reading data: Input/output error Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line 219, in main Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return tool_command[cmd]["command"](* args) Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File 
"/usr/lib/python2.7/site- packages/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in networks Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site- packages/vdsm/libvirtconnection.py", line 159, in get Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site- packages/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, timeout=10, sleep=0.2) Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise libvirtError('virConnectOpenAuth() failed') Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication failed: authentication failed Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control process exited, code=exited status=1 Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop Server Manager network restoration. Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual Desktop Server Manager. Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with result 'dependency'. Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered failed state. Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. 
Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. Can someone point me in the right direction to resolve this - it seems to be a SASL issue perhaps? *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www. facebook.com/telvue _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Fri Feb 16 21:14:05 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 16:14:05 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine Version: 3.5.0.1-1.el6 We have four other hosts that are running this same configuration already. I took one host out of the cluster (forcefully) that was working and now it will not add back in either - throwing the same SASL error. We are looking at downgrading libvirt as I've seen that somewhere else - is there another version of RH I should be trying? I have a host I can put it on. *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: > > > On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: > > Hello all, > > We recently had a network event where we lost access to our storage for a > period of time. 
The Cluster basically shut down all our VM's and in the > process we had three HV's that went offline and would not communicate > properly with the cluster. > > We have since completely reinstalled CentOS on the hosts and attempted to > install them into the cluster with no joy. We've gotten to the point where > we generally get an error message in the web gui: > > > Which EL release and which oVirt release are you using? My guess would be > latest EL, with an older oVirt? > Y. > > > Stage: Misc Configuration > Host hv-ausa-02 installation failed. Command returned failure code 1 > during SSH session 'root at 10.1.90.154'. > > the following is what we are seeing in the messages log: > > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : > authentication failed: authentication failed > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: > error : virNetSASLSessionListMechanisms:390 : internal error: cannot list > SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in > server.c near line 1757) > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: > error : remoteDispatchAuthSaslInit:3411 : authentication failed: > authentication failed > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15226: > error : virNetSocketReadWire:1808 : End of file while reading data: > Input/output error > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : > authentication failed: authentication failed > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: 15233: > error : virNetSASLSessionListMechanisms:390 : internal error: cannot list > SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error -4 in > server.c near line 1757) > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15233: > error : remoteDispatchAuthSaslInit:3411 : authentication failed: > authentication failed > Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 
15226: > error : virNetSocketReadWire:1808 : End of file while reading data: > Input/output error > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : > authentication failed: authentication failed > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line 219, > in main > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return tool_command[cmd]["command"](* > args) > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa > ges/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", > line 112, in networks > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa > ges/vdsm/libvirtconnection.py", line 159, in get > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa > ges/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, > timeout=10, sleep=0.2) > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", > line 1108, in retry > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", > line 105, in openAuth > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise > libvirtError('virConnectOpenAuth() failed') > Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication failed: > authentication failed > Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control process > exited, code=exited status=1 > Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop Server > Manager 
network restoration. > Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual Desktop > Server Manager. > Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with > result 'dependency'. > Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered > failed state. > Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. > Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. > Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. > Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. > Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. > > Can someone point me in the right direction to resolve this - it seems to > be a SASL issue perhaps? > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www.facebook > .com/telvue > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From msteele at telvue.com Fri Feb 16 21:18:41 2018 From: msteele at telvue.com (Mark Steele) Date: Fri, 16 Feb 2018 16:18:41 -0500 Subject: [ovirt-users] Unable to put Host into Maintenance mode In-Reply-To: References: <8CA42C48-9762-4A2F-826B-1915011887CC@redhat.com> <784ea984-7c14-ad09-1345-aef2ffa00664@endlessnow.com> Message-ID: So - to get this back on track - I was able to remove the host entirely from HostedEngine using ovirt-shell - but now cannot add it back for the same reason as not being able to add ANY hosts to this cluster (there is another email thread on this): *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Fri, Feb 16, 2018 at 3:30 PM, Yaniv Kaul wrote: > > > On Feb 15, 2018 7:35 PM, "Christopher Cox" wrote: > > On 02/15/2018 11:10 AM, Michal Skrivanek wrote: > ..snippity... with regards to oVirt 3.5 > > >> that?s a really old version?. >> > > I know I'll catch heat for this, but by "old" you mean like December of > 2015? Just trying put things into perspective. Thus it goes with the > ancient and decrepit Red Hat Ent. 7.1 days, right? > > I know, I know, FOSS... the only thing worse than running today's code is > running yesterday's. > > We still run a 3.5 oVirt in our dev lab, btw. But I would not have set > that up (not that I would have recommended oVirt to begin with), preferring > 3.4 at the time. I would have waited for 3.6. > > With that said, 3.5 isn't exactly on the "stable line" to Red Hat > Virtualization, that was 3.4 and then 3.6. > > > Red Hat doesn't support 3.x anymore, unless its 3.6 with specific > subscription that extends its support. > > > Some people can't afford major (downtime) upgrades every 3-6 months or > so. But, arguably, maybe we shouldn't be running oVirt. 
Maybe it's not > designed for "production". > > > 3.4,5,6 are minor releases of 3.x. > The same way that 4.1 and 4.2 are minor releases of 4.x. > I agree that with lots of changing landscape (for example, the move from > EL6 to EL7) and with the number of features introduced, they don't seem > that minor. But there's an ongoing effort to both keep backwards > compatibility as well continously improve quality - which regretfully, > requires updating from time to time. > > > I guess oVirt isn't really for production by definition, but many of us > are doing so. > > So... not really a "ding" against oVirt developers, it's just a rapidly > moving target with the normal risks that come with that. People just need > to understand that. > > And with that said, the fact that many of us are running those ancient > decrepit evil versions of oVirt in production today, is actually a > testimony to its quality. Good job devs! > > > Or a warning sign that upgrade is not yet easy as it should be. I believe > we've improved the experience and quality of the upgrade flow over time, > but we can certainly do a better job. > > I also think there are two additional factors : > 1. Don't fix what ain't broken - it works, why bother? Not much the oVirt > community can do here. > 2. Newer versions do not provide enough incentive to upgrade. This is a > tougher one - I believe they do, both in terms of quality as well as new > features that bring value to different use-cases. However, we may not be > doing enough 'marketing' work around them, or they are not documented well > enough, etc. > Y. > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From plord at intricatenetworks.com Fri Feb 16 23:31:10 2018 From: plord at intricatenetworks.com (Zip) Date: Fri, 16 Feb 2018 17:31:10 -0600 Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? Message-ID: Are there any updated docs for the WebUI Plugins API? I have found the following, which all appear to be old and no longer working: https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interface_Plugins/ https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_Sunnyvale_2013.pdf Thanks Zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From M.Vrgotic at activevideo.com Sat Feb 17 07:11:04 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Sat, 17 Feb 2018 07:11:04 +0000 Subject: [ovirt-users] 4.2 VM Portal -Create- VM section issue In-Reply-To: References: Message-ID: <06DC6162-4F48-4F6A-82F4-035A12715C64@ictv.com> Dear Tomas, In addition to the previous email, find attached the javascript console output from the browser: Kind regards, Marko Vrgotic From: "Vrgotic, Marko" Date: Thursday, 25 January 2018 at 13:52 To: Tomas Jelinek Cc: users Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hi Tomas, Thank you. The VM does get created, so I think permissions are in order: I will attach them in the next reply. As soon as possible I will attach all related logs.
-- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo From: Tomas Jelinek Date: Thursday, 25 January 2018 at 13:03 To: "Vrgotic, Marko" Cc: users , "users-request at ovirt.org" Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue On 24 Jan 2018 5:17 p.m., "Vrgotic, Marko" > wrote: Dear oVirt, After setting all parameters for a new VM and clicking on the 'Create' button, no progress status or acknowledgement that the action was accepted is shown in the web UI. In addition, when closing the add VM section, I am asked if I am sure, due to changes made. Is this expected behaviour? Can something be done about it? no, it is not. can you please provide the logs from the javascript console in browser? can you please make sure the user has permissions to create a vm? Kindly awaiting your reply. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: javas_browser_console_output Type: application/octet-stream Size: 22805 bytes Desc: javas_browser_console_output URL: From M.Vrgotic at activevideo.com Sat Feb 17 07:22:51 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Sat, 17 Feb 2018 07:22:51 +0000 Subject: [ovirt-users] How to protect SHE VM from being deleted in following setup Message-ID: Dear oVirt community, I have SHE on Gluster (not managed by SHE). Due to limitations of the VM Portal, I have given a couple of trusted Users trimmed-down Admin access, so that they can create VMs. However, this does make me a bit worried, since the SHE VM could get deleted like any other VM in the pool.
The SHE VM has its own storage pool, but it's part of the same Hypervisor Cluster (limitations of available HW), therefore my Users can see it and could accidentally delete it - it can happen! QUESTION: Any advice that could help me protect the SHE VM from being deleted? Any suggestions or ideas are highly welcome. Thank you. Best regards, Marko Vrgotic -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Sat Feb 17 07:32:49 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Sat, 17 Feb 2018 09:32:49 +0200 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele wrote: > We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine > Version: 3.5.0.1-1.el6 > You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , which is a result of a default change of libvirt and was fixed in later versions of oVirt than the one you are using. See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed; you can probably configure it manually. Y. > > We have four other hosts that are running this same configuration already. > I took one host out of the cluster (forcefully) that was working and now it > will not add back in either - throwing the same SASL error. > > We are looking at downgrading libvirt as I've seen that somewhere else - > is there another version of RH I should be trying? I have a host I can put > it on. > > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www.
> facebook.com/telvue > > On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: > >> >> >> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >> >> Hello all, >> >> We recently had a network event where we lost access to our storage for a >> period of time. The Cluster basically shut down all our VM's and in the >> process we had three HV's that went offline and would not communicate >> properly with the cluster. >> >> We have since completely reinstalled CentOS on the hosts and attempted to >> install them into the cluster with no joy. We've gotten to the point where >> we generally get an error message in the web gui: >> >> >> Which EL release and which oVirt release are you using? My guess would be >> latest EL, with an older oVirt? >> Y. >> >> >> Stage: Misc Configuration >> Host hv-ausa-02 installation failed. Command returned failure code 1 >> during SSH session 'root at 10.1.90.154'. >> >> the following is what we are seeing in the messages log: >> >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >> authentication failed: authentication failed >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: >> error : virNetSASLSessionListMechanisms:390 : internal error: cannot >> list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal Error >> -4 in server.c near line 1757) >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15231: >> error : remoteDispatchAuthSaslInit:3411 : authentication failed: >> authentication failed >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: 15226: >> error : virNetSocketReadWire:1808 : End of file while reading data: >> Input/output error >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >> authentication failed: authentication failed >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: 15233: >> error : virNetSASLSessionListMechanisms:390 : internal error: cannot >> list SASL mechanisms -4 (SASL(-4): no mechanism 
available: Internal Error >> -4 in server.c near line 1757) >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15233: >> error : remoteDispatchAuthSaslInit:3411 : authentication failed: >> authentication failed >> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: 15226: >> error : virNetSocketReadWire:1808 : End of file while reading data: >> Input/output error >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >> authentication failed: authentication failed >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >> 219, in main >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >> tool_command[cmd]["command"](*args) >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >> ges/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", >> line 112, in networks >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >> ges/vdsm/libvirtconnection.py", line 159, in get >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >> ges/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >> timeout=10, sleep=0.2) >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", >> line 1108, in retry >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", >> line 105, in openAuth >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >> 
libvirtError('virConnectOpenAuth() failed') >> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >> failed: authentication failed >> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control process >> exited, code=exited status=1 >> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >> Server Manager network restoration. >> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual Desktop >> Server Manager. >> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with >> result 'dependency'. >> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >> failed state. >> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >> >> Can someone point me in the right direction to resolve this - it seems to >> be a SASL issue perhaps? >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From M.Vrgotic at activevideo.com Sat Feb 17 06:48:48 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Sat, 17 Feb 2018 06:48:48 +0000 Subject: [ovirt-users] 4.2 VM Portal -Create- VM section issue In-Reply-To: References: Message-ID: <5408C9F2-31C2-46D1-8E59-CD8F39269C67@ictv.com> Dear Tomas, My apologies for delayed update, but I had other issues to solve first. However, here are the screenshots and logs from the engine, describing the issue. I have observed this issue in Safari, Chrome and Firefox browser. During process of creating a VM, from VM Portal: * Click on New * Name and Instance * Load a template * Click Create * VM is created, but Create VM page remains active as I have not clicked on Create * If I click Create, again, message that VM with same Name already exists. Creating new VM: [cid:image001.png at 01D3A7C3.BB09F690] Create a New Virtual Machine window is still open, and if clicked on ?Create VM? following message appears. [cid:image002.png at 01D3A7C3.BB09F690] Therefore I need to select ?Cancel? to go back to VM list. [cid:image003.png at 01D3A7C3.BB09F690] Logs are attached. Please let me know if additional information is required. Kind regards Marko Vrgotic From: "Vrgotic, Marko" Date: Thursday, 25 January 2018 at 13:52 To: Tomas Jelinek Cc: users Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hi Tomas, Thank you. VM does get created, so I think permission are in order: I will attach them in next reply. As soon as possible I will attach all logs related. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo From: "Vrgotic, Marko" Date: Thursday, 25 January 2018 at 13:18 To: Tomas Jelinek Cc: users , "users-request at ovirt.org" Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hi Tomas, Thank you. VM does get created, so I think permission are in order: I will attach them in next reply. As soon as possible I will attach all logs related. 
-- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo From: Tomas Jelinek Date: Thursday, 25 January 2018 at 13:03 To: "Vrgotic, Marko" Cc: users , "users-request at ovirt.org" Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue On 24 Jan 2018 5:17 p.m., "Vrgotic, Marko" > wrote: Dear oVirt, After setting all parameters for new VM and clicking on ?Create? button, no progress status or that action is accepted is seen from webui. In addition, when closing the add VM section, I am asked if I am sure, due to changes made. Is this expected behaviour? Can something be done about? no, it is not. can you please provide the logs from the javascript console in browser? can you please make sure the user has permissions to create a vm? Kindly awaiting your reply. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 787935 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 422184 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 103264 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: creating_vm Type: application/octet-stream Size: 16290 bytes Desc: creating_vm URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: repeating_vm_creation Type: application/octet-stream Size: 1061 bytes Desc: repeating_vm_creation URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: starting_vm Type: application/octet-stream Size: 15025 bytes Desc: starting_vm URL: From msteele at telvue.com Sat Feb 17 12:09:47 2018 From: msteele at telvue.com (Mark Steele) Date: Sat, 17 Feb 2018 07:09:47 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: Thank you very much! Question - is upgrading the ovirt installation a matter of just upgrading the engine? Or are there changes that are pushed down to each host / vm? *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: > > > On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele wrote: > >> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >> Version: 3.5.0.1-1.el6 >> > > You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , > which is a result of a default change of libvirt and was fixed in later > versions of oVirt than the one you are using. > See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you > can probably configure it manually. > Y. > > >> >> We have four other hosts that are running this same configuration >> already. I took one host out of the cluster (forcefully) that was working >> and now it will not add back in either - throwing the same SASL error. >> >> We are looking at downgrading libvirt as I've seen that somewhere else - >> is there another version of RH I should be trying? I have a host I can put >> it on. 
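A minimal sketch of the manual configuration Yaniv alludes to above, assuming the failure comes from the newer libvirt defaulting its SASL mechanism to one that needs Kerberos; the mechanism name, package name, sasldb path, and vdsm user name below are assumptions drawn from the referenced bug, not confirmed in this thread:

```shell
# Hedged sketch, not a verified procedure: pin libvirt's SASL mechanism
# back to one that requires no Kerberos, then recreate the SASL user
# that vdsm authenticates with, and restart libvirtd.
yum install -y cyrus-sasl-scram            # assumed provider of scram-sha-1
echo 'mech_list: scram-sha-1' >> /etc/sasl2/libvirt.conf
# sasldb path and user name are assumptions:
saslpasswd2 -p -a libvirt -f /etc/libvirt/passwd.db vdsm@ovirt
systemctl restart libvirtd
```

If libvirtd then starts cleanly, re-running the host install from the engine should get past the "cannot list SASL mechanisms" error quoted in the logs above.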
>> >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >> >>> >>> >>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>> >>> Hello all, >>> >>> We recently had a network event where we lost access to our storage for >>> a period of time. The Cluster basically shut down all our VM's and in the >>> process we had three HV's that went offline and would not communicate >>> properly with the cluster. >>> >>> We have since completely reinstalled CentOS on the hosts and attempted >>> to install them into the cluster with no joy. We've gotten to the point >>> where we generally get an error message in the web gui: >>> >>> >>> Which EL release and which oVirt release are you using? My guess would >>> be latest EL, with an older oVirt? >>> Y. >>> >>> >>> Stage: Misc Configuration >>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>> during SSH session 'root at 10.1.90.154'. 
>>> >>> the following is what we are seeing in the messages log: >>> >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>> Error -4 in server.c near line 1757) >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication failed: >>> authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>> Input/output error >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>> Error -4 in server.c near line 1757) >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication failed: >>> authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>> Input/output error >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>> 219, in main >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>> tool_command[cmd]["command"](*args) >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> 
ges/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", >>> line 112, in networks >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> ges/vdsm/libvirtconnection.py", line 159, in get >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> ges/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>> timeout=10, sleep=0.2) >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", >>> line 1108, in retry >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", >>> line 105, in openAuth >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>> libvirtError('virConnectOpenAuth() failed') >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>> failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>> process exited, code=exited status=1 >>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>> Server Manager network restoration. >>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>> Desktop Server Manager. >>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with >>> result 'dependency'. >>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>> failed state. >>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. 
>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>> >>> Can someone point me in the right direction to resolve this - it seems >>> to be a SASL issue perhaps? >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sat Feb 17 12:20:04 2018 From: rightkicktech at gmail.com (Alex K) Date: Sat, 17 Feb 2018 14:20:04 +0200 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: For a proper upgrade there are specific steps that you follow for each host and the engine. I usually upgrade the hosts first, then the engine. If you have spare resources so that you can put hosts into maintenance, the upgrade should be seamless. Also I think you need to go step by step: 3.5 -> 3.6 -> 4.0 ... etc. In case you have a similar test setup you may try it first there. On Feb 17, 2018 14:10, "Mark Steele" wrote: > Thank you very much! > > Question - is upgrading the ovirt installation a matter of just upgrading > the engine? Or are there changes that are pushed down to each host / vm? > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt.
Laurel, NJ 08054 > 800.885.8886 x128 <800%20885%208886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > > On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: > >> >> >> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele wrote: >> >>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>> Version: 3.5.0.1-1.el6 >>> >> >> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >> which is a result of a default change of libvirt and was fixed in later >> versions of oVirt than the one you are using. >> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you >> can probably configure it manually. >> Y. >> >> >>> >>> We have four other hosts that are running this same configuration >>> already. I took one host out of the cluster (forcefully) that was working >>> and now it will not add back in either - throwing the same SASL error. >>> >>> We are looking at downgrading libvirt as I've seen that somewhere else - >>> is there another version of RH I should be trying? I have a host I can put >>> it on. >>> >>> >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>> >>>> Hello all, >>>> >>>> We recently had a network event where we lost access to our storage for >>>> a period of time. The Cluster basically shut down all our VM's and in the >>>> process we had three HV's that went offline and would not communicate >>>> properly with the cluster. 
>>>> >>>> We have since completely reinstalled CentOS on the hosts and attempted >>>> to install them into the cluster with no joy. We've gotten to the point >>>> where we generally get an error message in the web gui: >>>> >>>> >>>> Which EL release and which oVirt release are you using? My guess would >>>> be latest EL, with an older oVirt? >>>> Y. >>>> >>>> >>>> Stage: Misc Configuration >>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>> during SSH session 'root at 10.1.90.154'. >>>> >>>> the following is what we are seeing in the messages log: >>>> >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>> Error -4 in server.c near line 1757) >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>> Input/output error >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>> Error -4 in server.c near line 1757) >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication >>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>> 15226: error : 
virNetSocketReadWire:1808 : End of file while reading data: >>>> Input/output error >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>>> 219, in main >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>> tool_command[cmd]["command"](*args) >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>> line 83, in upgrade_networks >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>> networks >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>> 159, in get >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 95, >>>> in _open_qemu_connection >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>>> timeout=10, sleep=0.2) >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>> libvirtError('virConnectOpenAuth() failed') >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>> process exited, code=exited 
status=1 >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>> Server Manager network restoration. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>> Desktop Server Manager. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with >>>> result 'dependency'. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>>> failed state. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>>> >>>> Can someone point me in the right direction to resolve this - it seems >>>> to be a SASL issue perhaps? >>>> >>>> *** >>>> *Mark Steele* >>>> CIO / VP Technical Operations | TelVue Corporation >>>> TelVue - We Share Your Vision >>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>> >>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>> www.telvue.com >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>> .com/telvue >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>>> >>> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Sat Feb 17 12:25:15 2018 From: msteele at telvue.com (Mark Steele) Date: Sat, 17 Feb 2018 07:25:15 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: Thank you Alex. I guess the first step is to get my existing hosts back into the cluster. 
I'm going to try to manually apply the patch that Yaniv sent over to see if I can get them back in. Mark *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Sat, Feb 17, 2018 at 7:20 AM, Alex K wrote: > For a proper upgrade there are specific steps that you follow for each > host and the engine. > > I usually upgrade the hosts first then the engine. If you have spare > resources so as to put hosts at maintenance then the upgrade should be > seamless. Also i think you need to go strp by step: 3.5 -> 3.6 -> 4.0 ... > etc > > In case you have a similar test setup you may try it first there. > > > > On Feb 17, 2018 14:10, "Mark Steele" wrote: > >> Thank you very much! >> >> Question - is upgrading the ovirt installation a matter of just upgrading >> the engine? Or are there changes that are pushed down to each host / vm? >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <800%20885%208886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: >> >>> >>> >>> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele >>> wrote: >>> >>>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>>> Version: 3.5.0.1-1.el6 >>>> >>> >>> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >>> which is a result of a default change of libvirt and was fixed in later >>> versions of oVirt than the one you are using. >>> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you >>> can probably configure it manually. >>> Y. 
>>> >>> >>>> >>>> We have four other hosts that are running this same configuration >>>> already. I took one host out of the cluster (forcefully) that was working >>>> and now it will not add back in either - throwing the same SASL error. >>>> >>>> We are looking at downgrading libvirt as I've seen that somewhere else >>>> - is there another version of RH I should be trying? I have a host I can >>>> put it on. >>>> >>>> >>>> >>>> *** >>>> *Mark Steele* >>>> CIO / VP Technical Operations | TelVue Corporation >>>> TelVue - We Share Your Vision >>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>> >>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>> www.telvue.com >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>> .com/telvue >>>> >>>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>>> >>>>> >>>>> >>>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>>> >>>>> Hello all, >>>>> >>>>> We recently had a network event where we lost access to our storage >>>>> for a period of time. The Cluster basically shut down all our VM's and in >>>>> the process we had three HV's that went offline and would not communicate >>>>> properly with the cluster. >>>>> >>>>> We have since completely reinstalled CentOS on the hosts and attempted >>>>> to install them into the cluster with no joy. We've gotten to the point >>>>> where we generally get an error message in the web gui: >>>>> >>>>> >>>>> Which EL release and which oVirt release are you using? My guess would >>>>> be latest EL, with an older oVirt? >>>>> Y. >>>>> >>>>> >>>>> Stage: Misc Configuration >>>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>>> during SSH session 'root at 10.1.90.154'. 
>>>>> >>>>> the following is what we are seeing in the messages log: >>>>> >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>> Error -4 in server.c near line 1757) >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>> Input/output error >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>> Error -4 in server.c near line 1757) >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>> Input/output error >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call >>>>> last): >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>>>> 219, in main >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>>> tool_command[cmd]["command"](*args) >>>>> Feb 16 11:39:53 
hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>>> line 83, in upgrade_networks >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>>> networks >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>> 159, in get >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>> 95, in _open_qemu_connection >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>>>> timeout=10, sleep=0.2) >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>>> libvirtError('virConnectOpenAuth() failed') >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>>> process exited, code=exited status=1 >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>>> Server Manager network restoration. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>>> Desktop Server Manager. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed >>>>> with result 'dependency'. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>>>> failed state. 
>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed.
>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root.
>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root.
>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root.
>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root.
>>>>>
>>>>> Can someone point me in the right direction to resolve this - it seems
>>>>> to be a SASL issue perhaps?
>>>>>
>>>>> ***
>>>>> *Mark Steele*
>>>>> CIO / VP Technical Operations | TelVue Corporation
>>>>> TelVue - We Share Your Vision
>>>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
>>>>>
>>>>> 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com
>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From punaatua.pk at gmail.com  Sat Feb 17 12:57:15 2018
From: punaatua.pk at gmail.com (Punaatua PAINT-KOUI)
Date: Sat, 17 Feb 2018 02:57:15 -1000
Subject: [ovirt-users] VDSM SSL validity
In-Reply-To: 
References: 
Message-ID: 

Any idea someone ?

On 14 Feb 2018 at 23:19, "Punaatua PAINT-KOUI" wrote:

> Hi,
>
> I set up a hyperconverged solution with 3 nodes, hosted engine on
> glusterfs.
> We run this setup in a PCI-DSS environment. According to PCI-DSS
> requirements, we are required to reduce the validity of any certificate
> to under 39 months.
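For an audit requirement like this, it helps to first verify what validity window the host certificates actually have. A minimal sketch with openssl; the vdsm path named in the comments is an assumption from a default oVirt host install, and a throwaway self-signed certificate is generated only so the commands are self-contained:

```shell
# A stand-in certificate limited to ~39 months (1185 days); on a real host
# the same checks would be pointed at /etc/pki/vdsm/certs/vdsmcert.pem
# (default path -- verify on your install).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=vdsm-demo" \
    -keyout demo-key.pem -out demo-cert.pem -days 1185

# Print the start and expiry dates; their difference is the validity window
# that PCI-DSS caps at 39 months.
openssl x509 -noout -dates -in demo-cert.pem

# Assert the certificate is still valid 38 months (~1155 days) from now;
# exit status 0 means it will not have expired by then.
openssl x509 -noout -checkend $((1155 * 24 * 3600)) -in demo-cert.pem \
    && echo "still valid in 38 months"
```

The same `-checkend` test can be scripted across hosts to flag certificates whose remaining lifetime is out of policy.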
>
> I saw in this link https://www.ovirt.org/develop/release-management/features/infra/pki/
> that I can use the option VdsCertificateValidityInYears at engine-config.
>
> I'm running ovirt engine 4.2.1, and I checked, when I was on 4.2, how to edit
> the option with engine-config --all and engine-config --list, but the option
> is not listed.
>
> Am I missing something?
>
> I think I can regenerate a VDSM certificate with openssl and the CA conf
> in /etc/pki/ovirt-engine on the hosted-engine, but I would rather modify
> the option for future hosts that I will add.
>
> --
> -------------------------------------
> PAINT-KOUI Punaatua
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From msteele at telvue.com  Sat Feb 17 17:53:42 2018
From: msteele at telvue.com (Mark Steele)
Date: Sat, 17 Feb 2018 12:53:42 -0500
Subject: [ovirt-users] Unable to add Hosts to Cluster
In-Reply-To: 
References: 
Message-ID: 

Yaniv,

I have one of my developers assisting me and we are continuing to run into
issues. This is a note from him:

Hi, I'm trying to add a host to ovirt, but I'm running into package
dependency problems. I have existing hosts that are working and integrated
properly, and inspecting those, I am able to match the packages between the
new host and the existing, but when I then try to add the new host to
ovirt, it fails on reinstall because it's trying to install packages that
are later versions. Does the installation run list from ovirt-release35
002-1 have unspecified versions? The working hosts use libvirt-1.1.1-29,
and vdsm-4.16.7, but it's trying to install vdsm-4.16.30, which requires a
higher version of libvirt, at which point the installation fails. Is there
some way I can specify which package versions the ovirt install procedure
uses? Or better yet, skip the package management step entirely?

***
*Mark Steele*
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt.
Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: > > > On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele wrote: > >> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >> Version: 3.5.0.1-1.el6 >> > > You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , > which is a result of a default change of libvirt and was fixed in later > versions of oVirt than the one you are using. > See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you > can probably configure it manually. > Y. > > >> >> We have four other hosts that are running this same configuration >> already. I took one host out of the cluster (forcefully) that was working >> and now it will not add back in either - throwing the same SASL error. >> >> We are looking at downgrading libvirt as I've seen that somewhere else - >> is there another version of RH I should be trying? I have a host I can put >> it on. >> >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >> >>> >>> >>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>> >>> Hello all, >>> >>> We recently had a network event where we lost access to our storage for >>> a period of time. The Cluster basically shut down all our VM's and in the >>> process we had three HV's that went offline and would not communicate >>> properly with the cluster. >>> >>> We have since completely reinstalled CentOS on the hosts and attempted >>> to install them into the cluster with no joy. 
We've gotten to the point >>> where we generally get an error message in the web gui: >>> >>> >>> Which EL release and which oVirt release are you using? My guess would >>> be latest EL, with an older oVirt? >>> Y. >>> >>> >>> Stage: Misc Configuration >>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>> during SSH session 'root at 10.1.90.154'. >>> >>> the following is what we are seeing in the messages log: >>> >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>> Error -4 in server.c near line 1757) >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication failed: >>> authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>> Input/output error >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>> Error -4 in server.c near line 1757) >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication failed: >>> authentication failed >>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>> Input/output error >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>> authentication failed: 
authentication failed >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>> 219, in main >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>> tool_command[cmd]["command"](*args) >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> ges/vdsm/tool/upgrade_300_networks.py", line 83, in upgrade_networks >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", >>> line 112, in networks >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> ges/vdsm/libvirtconnection.py", line 159, in get >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packa >>> ges/vdsm/libvirtconnection.py", line 95, in _open_qemu_connection >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>> timeout=10, sleep=0.2) >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib/python2.7/site-packages/vdsm/utils.py", >>> line 1108, in retry >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/lib64/python2.7/site-packages/libvirt.py", >>> line 105, in openAuth >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>> libvirtError('virConnectOpenAuth() failed') >>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>> failed: authentication failed >>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>> process exited, code=exited status=1 >>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>> Server Manager network restoration. >>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>> Desktop Server Manager. 
>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with >>> result 'dependency'. >>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>> failed state. >>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>> >>> Can someone point me in the right direction to resolve this - it seems >>> to be a SASL issue perhaps? >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Sun Feb 18 06:32:49 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 18 Feb 2018 08:32:49 +0200 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 5:50 AM, Alex Bartonek wrote: > > -------- Original Message -------- > On February 15, 2018 12:52 AM, Yedidyah Bar David wrote: > >>On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek Alex at unix1337.com wrote: >>>-------- Original Message -------- >>> On February 14, 2018 2:23 AM, Yedidyah Bar David didi at redhat.com wrote: >>>>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote: >>>>>I've built and rebuilt about 4 oVirt servers. 
Consider myself pretty good >>>>> at this. LOL. >>>>> So I am setting up a oVirt server for a friend on his r710. CentOS 7, ovirt >>>>> 4.2. /etc/hosts has the correct IP and FQDN setup. >>>>> When I build a VM and try to open a console session via SPICE I am unable >>>>> to connect to the graphic server. I'm connecting from a Windows 10 box. >>>>> Using virt-manager to connect. >>>>>What happens when you try? >>>>Unable to connect to the graphic console is what the error says. Here is the .vv file other than the cert stuff in it: >>>[virt-viewer] >>> type=spice >>> host=192.168.1.83 >>> port=-1 >>> password= >>>Password is valid for 120 seconds. >>> >>>delete-this-file=1 >>> fullscreen=0 >>> title=Win_7_32bit:%d >>> toggle-fullscreen=shift+f11 >>> release-cursor=shift+f12 >>> tls-port=5900 >>> enable-smartcard=0 >>> enable-usb-autoshare=1 >>> usb-filter=-1,-1,-1,-1,0 >>> tls-ciphers=DEFAULT >>>host-subject=O=williams.com,CN=randb.williams.com >>>Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue..no go. >>> >> >> Did you verify that you can connect there manually (e.g. with telnet)? >> Can you run a sniffer on both sides to make sure traffic passes correctly? >> Can you check vdsm/libvirt logs on the host side? > > > Ok.. I must have tanked it on install with the firewall. The firewall is blocking port 5900. This is on CentOS 7. If I flush the rules, it works. Thanks for the report. Did you choose to have firewall configured automatically, or did you configure it yourself? 
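For the record, the console ports can be opened persistently instead of flushing the whole rule set. A sketch assuming the stock firewalld setup of CentOS 7; 5900-6923 is the usual oVirt console port range, so verify it matches your cluster's console settings before applying:

```shell
# Open the SPICE/VNC console port range and keep the rule across reboots.
firewall-cmd --permanent --add-port=5900-6923/tcp
firewall-cmd --reload

# Confirm the range is now listed among the active rules.
firewall-cmd --list-ports
```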
Best regards, -- Didi From didi at redhat.com Sun Feb 18 07:05:36 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Sun, 18 Feb 2018 09:05:36 +0200 Subject: [ovirt-users] database restoration In-Reply-To: <2E4BC6F0-E5D1-44D4-B21D-B890C15C3FFC@orange.fr> References: <2E4BC6F0-E5D1-44D4-B21D-B890C15C3FFC@orange.fr> Message-ID: On Fri, Feb 16, 2018 at 1:04 PM, Fabrice Bacchella wrote: > I'm running a restoration test and getting the following log generated by engine-backup --mode=restore: Which version? Did you also get any error on stdout/stderr, or only in the log? > > pg_restore: [archiver (db)] Error while PROCESSING TOC: > pg_restore: [archiver (db)] Error from TOC entry 4274; 0 0 COMMENT EXTENSION plpgsql > pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql > Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; > > > > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges could be revoked for "public" > pg_restore: WARNING: no privileges were granted for "public" > pg_restore: WARNING: no privileges were granted for "public" > WARNING: errors ignored on restore: 1 > > Do I need to worry, as this error is ignored ? TL;DR no need to worry, can be ignored. Details: engine-backup has a specific set of errors it ignores. You can search inside it for 'IGNORED_ERRORS' to see the list. It _also_ logs the entire pg_restore output, just for reference. (I also have a patch to log separately the list of errors not ignored: https://gerrit.ovirt.org/86395 Need to find time to verify it, probably only in 4.3...) This specific error happens due to the following: PG by default creates new databases with the extension plpgsql. This is good for us, as the engine needs it. However, if you try to manually create this extension on some db, you need admin permission for this - owning (e.g.) the db is not enough. 
When backing up the database, pg_dump dumps everything in it, including
commands to create this extension. When engine-backup restores a database,
it always uses only the credentials of the engine db user, not postgres,
thus (by default) has no admin privs at this point. So this 'create'
command fails, and that's ok.

Best regards,
--
Didi

From fabrice.bacchella at orange.fr  Sun Feb 18 09:11:11 2018
From: fabrice.bacchella at orange.fr (Fabrice Bacchella)
Date: Sun, 18 Feb 2018 10:11:11 +0100
Subject: [ovirt-users] database restoration
In-Reply-To: 
References: <2E4BC6F0-E5D1-44D4-B21D-B890C15C3FFC@orange.fr>
Message-ID: <598BF589-FF72-48EC-9C2E-BE365638485C@orange.fr>

> On 18 Feb 2018, at 08:05, Yedidyah Bar David wrote:
>
> On Fri, Feb 16, 2018 at 1:04 PM, Fabrice Bacchella wrote:
>> I'm running a restoration test and getting the following log generated by engine-backup --mode=restore:
>
> Which version?

9.2, distribution package.

> Did you also get any error on stdout/stderr, or only in the log?

Only in the logs.

> TL;DR no need to worry, can be ignored.
> ...

Thanks, looks good to me.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mlipchuk at redhat.com  Sun Feb 18 10:39:39 2018
From: mlipchuk at redhat.com (Maor Lipchuk)
Date: Sun, 18 Feb 2018 12:39:39 +0200
Subject: [ovirt-users] Import Domain and snapshot issue ... please help !!!
In-Reply-To: <0f849bf2-6766-aa64-0395-48ae6d6ade8e@pg.infn.it>
References: <54306728-1b78-6634-0bcb-26f836cb1cf9@pg.infn.it>
 <04bfc301-e00b-2532-e2ed-b6c49470a1f2@pg.infn.it>
 <53ba550f-53ad-8c09-bbd7-7cce50538f94@pg.infn.it>
 <0f849bf2-6766-aa64-0395-48ae6d6ade8e@pg.infn.it>
Message-ID: 

Ala,

IIUC you mentioned that a locked snapshot can still be removed.
Can you please guide how to do that?
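For reference, locked snapshots and disks are normally cleared with the unlock_entity.sh utility that ships next to taskcleaner.sh on the engine machine. A sketch; the option names should be double-checked against the installed version, and a database backup should be taken first:

```shell
# Default location of the engine db utilities (verify on your install).
cd /usr/share/ovirt-engine/setup/dbutils

# Query which entities are currently locked.
./unlock_entity.sh -q -t all

# Unlock one entity by type and UUID (use -t disk or -t snapshot).
./unlock_entity.sh -t snapshot <snapshot-uuid>
```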
Regards,
Maor

On Fri, Feb 16, 2018 at 10:50 AM, Enrico Becchetti <
enrico.becchetti at pg.infn.it> wrote:

> After rebooting the engine virtual machine the task disappears, but the
> virtual disk is still locked; any ideas how to remove that lock?
> Thanks again.
> Enrico
>
> On 16/02/2018 09:45, Enrico Becchetti wrote:
>
> Dear All,
> Are there tools to remove this task (in the attachment)?
>
> taskcleaner.sh seems not to work:
>
> [root at ovirt-new dbutils]# ./taskcleaner.sh -v -r
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
> t
> SELECT DeleteAllCommands();
> 6
> [root at ovirt-new dbutils]# ./taskcleaner.sh -v -R
> select exists (select * from information_schema.tables where table_schema
> = 'public' and table_name = 'command_entities');
> t
> This will remove all async_tasks table content!!!
> Caution, this operation should be used with care. Please contact support
> prior to running this command
> Are you sure you want to proceed? [y/n]
> y
> TRUNCATE TABLE async_tasks cascade;
> TRUNCATE TABLE
>
> After that I still see the same running tasks. Does it make sense?
>
> Thanks
> Best Regards
> Enrico
>
> On 14/02/2018 15:53, Enrico Becchetti wrote:
>
> Dear All,
> The old snapshots seem to be the problem. In fact the domain DATA_FC
> running in 3.5 had some LVM snapshot volumes. Before deactivating DATA_FC
> I didn't remove these snapshots, so when I attached this volume to the new
> ovirt 4.2 and imported all the VMs at the same time, I also imported all
> the snapshots. But now how can I remove them? Through the ovirt web
> interface the running remove tasks still hang. Are there any other methods?
> Thanks for following this case.
> Best Regards
> Enrico
>
> On 14/02/2018 14:34, Maor Lipchuk wrote:
>
> Seems like all the engine logs are full of the same error.
> From vdsm.log.16.xz I can see an error which might explain this failure: > > 2018-02-12 07:51:16,161+0100 INFO (ioprocess communication (40573)) > [IOProcess] Starting ioprocess (__init__:447) > 2018-02-12 07:51:16,201+0100 INFO (jsonrpc/3) [vdsm.api] FINISH > mergeSnapshots return=None from=::ffff:10.0.0.46,57032, > flow_id=fd4041b3-2301-44b0-aa65-02bd089f6568, task_id=1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 > (api:52) > 2018-02-12 07:51:16,275+0100 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Image.mergeSnapshots succeeded in 0.13 seconds (__init__:573) > 2018-02-12 07:51:16,276+0100 INFO (tasks/1) [storage.ThreadPool.WorkerThread] > START task 1be430dc-eeb0-4dc9-92df-3f5b7943c6e0 (cmd= Task.commit of >, > args=None) (threadPool:208) > 2018-02-12 07:51:16,543+0100 INFO (tasks/1) [storage.Image] > sdUUID=47b7c9aa-ef53-48bc-bb55-4a1a0ba5c8d5 vmUUID= > imgUUID=ee9ab34c-47a8-4306-95d7-dd4318c69ef5 ancestor=9cdc96de-65b7-4187-8ec3-8190b78c1825 > successor=8f595e80-1013-4c14-a2f5-252bce9526fdpostZero=False > discard=False (image:1240) > 2018-02-12 07:51:16,669+0100 ERROR (tasks/1) [storage.TaskManager.Task] > (Task='1be430dc-eeb0-4dc9-92df-3f5b7943c6e0') Unexpected error (task:875) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, > in _run > return fn(*args, **kargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, > in run > return self.cmd(*self.argslist, **self.argsdict) > File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line > 79, in wrapper > return method(self, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1853, > in mergeSnapshots > discard) > File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line > 1251, in merge > srcVol = vols[successor] > KeyError: u'8f595e80-1013-4c14-a2f5-252bce9526fd' > > Ala, maybe you know if there is any known issue with mergeSnapshots? 
> The use case here are VMs from oVirt 3.5 which got registered to oVirt 4.2.
>
> Regards,
> Maor
>
> On Wed, Feb 14, 2018 at 10:11 AM, Enrico Becchetti <
> enrico.becchetti at pg.infn.it> wrote:
>
>> Hi,
>> also you can download them through these links:
>>
>> https://owncloud.pg.infn.it/index.php/s/QpsTyGxtRTPYRTD
>> https://owncloud.pg.infn.it/index.php/s/ph8pLcABe0nadeb
>>
>> Thanks again !!!!
>>
>> Best Regards
>> Enrico
>>
>> On 13/02/2018 14:52, Maor Lipchuk wrote:
>>
>> On Tue, Feb 13, 2018 at 3:51 PM, Maor Lipchuk wrote:
>>
>>> On Tue, Feb 13, 2018 at 3:42 PM, Enrico Becchetti <
>>> enrico.becchetti at pg.infn.it> wrote:
>>>
>>>> see the attached files please ... thanks for your attention !!!
>>>
>>> Seems like the engine logs do not contain the entire process, can you
>>> please share older logs since the import operation?
>>
>> And VDSM logs as well from your host
>>
>>>> Best Regards
>>>> Enrico
>>>>
>>>> On 13/02/2018 14:09, Maor Lipchuk wrote:
>>>>
>>>> On Tue, Feb 13, 2018 at 1:48 PM, Enrico Becchetti <
>>>> enrico.becchetti at pg.infn.it> wrote:
>>>>
>>>>> Dear All,
>>>>> I have been using ovirt for a long time with three hypervisors and an
>>>>> external engine running in a CentOS VM.
>>>>>
>>>>> These three hypervisors have HBAs and access to fiber channel storage.
>>>>> Until recently I used version 3.5, then I reinstalled everything from
>>>>> scratch and now I have 4.2.
>>>>>
>>>>> Before formatting everything, I detached the storage data domain (FC)
>>>>> with the virtual machines and reimported it to the new 4.2 and all went
>>>>> well. In this domain there were virtual machines with and without
>>>>> snapshots.
>>>>>
>>>>> Now I have two problems. The first is that if I try to delete a
>>>>> snapshot the process does not end successfully and remains hanging, and
>>>>> the second problem is that in one case I lost the virtual machine !!!
>>>>
>>>> Not sure that I fully understand the scenario.
>>>> How did the virtual machine get lost if you only tried to delete a
>>>> snapshot?
>>>>
>>>>> So I need your help to kill the three running zombie tasks, because
>>>>> with taskcleaner.sh I can't do anything, and then I need to know how I
>>>>> can delete the old snapshots made with the 3.5 without losing other data
>>>>> and without leaving new processes that do not terminate correctly.
>>>>>
>>>>> If you want some log files please let me know.
>>>>
>>>> Hi Enrico,
>>>>
>>>> Can you please attach the engine and VDSM logs?
>>>>
>>>>> Thank you so much.
>>>>> Best Regards
>>>>> Enrico
>>>>>
>>>>> _______________________________________________
>>>>> Users mailing list
>>>>> Users at ovirt.org
>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>> --
>>>> _______________________________________________________________________
>>>>
>>>> Enrico Becchetti          Servizio di Calcolo e Reti
>>>>
>>>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>>>> Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
>>>> Phone:+39 075 5852777  Mail: Enrico.Becchetti at pg.infn.it
>>>> ______________________________________________________________________
>>
>> --
>> _______________________________________________________________________
>>
>> Enrico Becchetti          Servizio di Calcolo e Reti
>>
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli,c/o Dipartimento di Fisica  06123 Perugia (ITALY)
>> Phone:+39 075 5852777  Mail: Enrico.Becchetti at pg.infn.it
>> ______________________________________________________________________
>>
>> --
>> _______________________________________________________________________
>>
>> Enrico Becchetti          Servizio di Calcolo e Reti
>>
>> Istituto Nazionale di Fisica Nucleare - Sezione di Perugia
>> Via Pascoli,c/o Dipartimento di Fisica
06123 Perugia (ITALY) >> Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it >> ______________________________________________________________________ >> >> > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -- > _______________________________________________________________________ > > Enrico Becchetti Servizio di Calcolo e Reti > > Istituto Nazionale di Fisica Nucleare - Sezione di Perugia > Via Pascoli,c/o Dipartimento di Fisica 06123 Perugia (ITALY) > Phone:+39 075 5852777 <+39%20075%20585%202777> Mail: Enrico.Becchettipg.infn.it > ______________________________________________________________________ > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From rightkicktech at gmail.com  Sun Feb 18 11:24:43 2018
From: rightkicktech at gmail.com (Alex K)
Date: Sun, 18 Feb 2018 13:24:43 +0200
Subject: [ovirt-users] ovirt change of email alert
Message-ID: 

Hi all,

I had put a specific email alert during the deploy and then I wanted to
change it. I did the following:

At one of the hosts I ran:

hosted-engine --set-shared-config destination-emails alerts at domain.com
--type=broker
systemctl restart ovirt-ha-broker.service

I had to do the above since changing the email from the GUI did not have
any effect.

After the above, the emails are received at the new email address, but the
cluster seems to have some issue recognizing the state of the engine. I am
flooded with emails saying "EngineMaybeAway-EngineUnexpectedlyDown".

I have also restarted the ovirt-ha-agent.service on each host. I put the
cluster into global maintenance and then disabled global maintenance.

In the host agent logs I have:

MainThread::ERROR::2018-02-18 11:12:20,751::hosted_engine::720::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(_initialize_sanlock) cannot get lock on host id 1: host already holds lock on a different host id

On another host the logs show:

MainThread::INFO::2018-02-18 11:20:23,692::states::682::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(score) Score is 0 due to unexpected vm shutdown at Sun Feb 18 11:15:13 2018
MainThread::INFO::2018-02-18 11:20:23,692::hosted_engine::453::ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine::(start_monitoring) Current state EngineUnexpectedlyDown (score: 0)

The engine status on the 3 hosts is:

hosted-engine --vm-status

--== Host 1 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : v0
Host ID                            : 1
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : cfd15dac
local_conf_timestamp               : 4721144
Host timestamp                     : 4721144
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=4721144 (Sun Feb 18 11:20:33 2018)
host-id=1
score=0
vm_conf_refresh_time=4721144 (Sun Feb 18 11:20:33 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Tue Feb 24 15:29:44 1970

--== Host 2 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : True
Hostname                           : v1
Host ID                            : 2
Engine status                      : {"reason": "vm not running on this host", "health": "bad", "vm": "down", "detail": "unknown"}
Score                              : 0
stopped                            : False
Local maintenance                  : False
crc32                              : 5cbcef4c
local_conf_timestamp               : 2499416
Host timestamp                     : 2499416
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2499416 (Sun Feb 18 11:20:46 2018)
host-id=2
score=0
vm_conf_refresh_time=2499416 (Sun Feb 18 11:20:46 2018)
conf_on_shared_storage=True
maintenance=False
state=EngineUnexpectedlyDown
stopped=False
timeout=Thu Jan 29 22:18:42 1970

--== Host 3 status ==--

conf_on_shared_storage             : True
Status up-to-date                  : False
Hostname                           : v2
Host ID                            : 3
Engine status                      : unknown stale-data
Score                              : 3400
stopped                            : False
Local maintenance                  : False
crc32                              : f064d529
local_conf_timestamp               : 2920612
Host timestamp                     : 2920611
Extra metadata (valid at timestamp):
metadata_parse_version=1
metadata_feature_version=1
timestamp=2920611 (Sun Feb 18 10:47:31 2018)
host-id=3
score=3400
vm_conf_refresh_time=2920612 (Sun Feb 18 10:47:32 2018)
conf_on_shared_storage=True
maintenance=False
state=GlobalMaintenance
stopped=False

Putting each host into maintenance and then activating them back does not
resolve the issue. It seems I have to avoid defining an email address
during deploy and set it only later in the GUI.

How can one recover from this situation?

Thanx,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rightkicktech at gmail.com  Sun Feb 18 11:46:08 2018
From: rightkicktech at gmail.com (Alex K)
Date: Sun, 18 Feb 2018 13:46:08 +0200
Subject: [ovirt-users] Failing live migration with SPICE
Message-ID: 

Hi all,

I am running a 3 node ovirt 4.1 self-hosted setup.
I have consistently observed that windows 10 VMs with SPICE console fail
to live migrate. Other VMs (windows server 2016) do migrate normally.

VDSM log indicates:

internal error: unable to execute QEMU command 'migrate': qxl: guest bug:
command not in ram bar (migration:287)
2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm]
(vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate
(migration:429)
if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed',
dom=self)
libvirtError: internal error: unable to execute QEMU command 'migrate':
qxl: guest bug: command not in ram bar

Seems like a guest agent bug for Windows 10? Is there any fix?

Thanx,
Alex
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rightkicktech at gmail.com  Sun Feb 18 11:53:13 2018
From: rightkicktech at gmail.com (Alex K)
Date: Sun, 18 Feb 2018 13:53:13 +0200
Subject: [ovirt-users] Failing live migration with SPICE
In-Reply-To: 
References: 
Message-ID: 

Seems that this is due to:

https://bugzilla.redhat.com/show_bug.cgi?id=1446147

I will check if I can find newer guest agents.

On Sun, Feb 18, 2018 at 1:46 PM, Alex K wrote:

> Hi all,
>
> I am running a 3 node ovirt 4.1 selft hosted setup.
> I have consistently observed that windows 10 VMs with SPICE console fail
> to live migrate. Other VMs (windows server 2016) do migrate normally.
> > VDSM log indicates: > > internal error: unable to execute QEMU command 'migrate': qxl: guest bug: > command not in ram bar (migration:287) > 2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm] > (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate > (migration:429) > if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', > dom=self) > libvirtError: internal error: unable to execute QEMU command 'migrate': > qxl: guest bug: command not in ram bar > > Seems as a guest agent bug for Windows 10? Is there any fix? > > Thanx, > Alex > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sun Feb 18 12:09:12 2018 From: rightkicktech at gmail.com (Alex K) Date: Sun, 18 Feb 2018 14:09:12 +0200 Subject: [ovirt-users] Failing live migration with SPICE In-Reply-To: References: Message-ID: I see that the latest guest tools for 4.1 are dated 27-04-2017. http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/ http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/ Can I use the tools from 4.2 at and install them at Windows VMs running on top 4.1? Thanx, Alex On Sun, Feb 18, 2018 at 1:53 PM, Alex K wrote: > Seems that this is due to: > > https://bugzilla.redhat.com/show_bug.cgi?id=1446147 > > I will check If i can find newer guest agents. > > On Sun, Feb 18, 2018 at 1:46 PM, Alex K wrote: > >> Hi all, >> >> I am running a 3 node ovirt 4.1 selft hosted setup. >> I have consistently observed that windows 10 VMs with SPICE console fail >> to live migrate. Other VMs (windows server 2016) do migrate normally. 
>> >> VDSM log indicates: >> >> internal error: unable to execute QEMU command 'migrate': qxl: guest bug: >> command not in ram bar (migration:287) >> 2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm] >> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate >> (migration:429) >> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', >> dom=self) >> libvirtError: internal error: unable to execute QEMU command 'migrate': >> qxl: guest bug: command not in ram bar >> >> Seems as a guest agent bug for Windows 10? Is there any fix? >> >> Thanx, >> Alex >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Sun Feb 18 14:23:50 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Sun, 18 Feb 2018 15:23:50 +0100 Subject: [ovirt-users] How to protect SHE VM from being deleted in following setup In-Reply-To: References: Message-ID: > On 17 Feb 2018, at 08:22, Vrgotic, Marko wrote: > > Dear oVirt community, > > I have SHE on the Gluster (not managed by SHE). > Due to limitations of VM Portal, I have given couple of trusted Users, trimmed down Admin access, so that they can create VMs. > > However, this does make me bit worried, since the SHE VM could get deleted as any other VM in the pool. Why do you give them permissions to HE VM? You should be able to give them creation, but not let them delete VMs they do not own > > The SHE VM has its own storage pool, but it?s part of same Hypervisor Cluster (limitations of available HW), therefore my Users can see it and accidentally delete it ? it can happen! > > QUESTION: Any advices that could help me protect SHE VM from being deleted? There?s ?Delete Protection? property for every VM, that prevents people from accidentally deleting them. Might be enough, messing with permissions might be tricky. Thanks, michal > > Any suggestions, ideas are highly welcome. > > Thank you. 
> > Best regards, > Marko Vrgotic > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Sun Feb 18 14:25:34 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Sun, 18 Feb 2018 15:25:34 +0100 Subject: [ovirt-users] Failing live migration with SPICE In-Reply-To: References: Message-ID: <26EA6196-1715-40DE-B758-EE82F857FB29@redhat.com> > On 18 Feb 2018, at 13:09, Alex K wrote: > > I see that the latest guest tools for 4.1 are dated 27-04-2017. > > http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/ > > http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt-toolsSetup/4.2-1.el7.centos/ > > Can I use the tools from 4.2 at and install them at Windows VMs running on top 4.1? yes you can, tools are compatible and it almost always makes sense to run latest regardless what?s your ovirt cluster version > > Thanx, > Alex > > On Sun, Feb 18, 2018 at 1:53 PM, Alex K > wrote: > Seems that this is due to: > > https://bugzilla.redhat.com/show_bug.cgi?id=1446147 > > I will check If i can find newer guest agents. > > On Sun, Feb 18, 2018 at 1:46 PM, Alex K > wrote: > Hi all, > > I am running a 3 node ovirt 4.1 selft hosted setup. > I have consistently observed that windows 10 VMs with SPICE console fail to live migrate. Other VMs (windows server 2016) do migrate normally. 
> > VDSM log indicates: > > internal error: unable to execute QEMU command 'migrate': qxl: guest bug: command not in ram bar (migration:287) > 2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm] (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate (migration:429) > if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', dom=self) > libvirtError: internal error: unable to execute QEMU command 'migrate': qxl: guest bug: command not in ram bar > > Seems as a guest agent bug for Windows 10? Is there any fix? > > Thanx, > Alex > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sun Feb 18 17:50:45 2018 From: rightkicktech at gmail.com (Alex K) Date: Sun, 18 Feb 2018 19:50:45 +0200 Subject: [ovirt-users] Failing live migration with SPICE In-Reply-To: <26EA6196-1715-40DE-B758-EE82F857FB29@redhat.com> References: <26EA6196-1715-40DE-B758-EE82F857FB29@redhat.com> Message-ID: On Sun, Feb 18, 2018 at 4:25 PM, Michal Skrivanek < michal.skrivanek at redhat.com> wrote: > > > On 18 Feb 2018, at 13:09, Alex K wrote: > > I see that the latest guest tools for 4.1 are dated 27-04-2017. > > http://resources.ovirt.org/pub/ovirt-4.1/iso/oVirt-toolsSetup/ > > http://resources.ovirt.org/pub/ovirt-4.2/iso/oVirt- > toolsSetup/4.2-1.el7.centos/ > > Can I use the tools from 4.2 at and install them at Windows VMs running on > top 4.1? > > > yes you can, tools are compatible and it almost always makes sense to run > latest regardless what?s your ovirt cluster version > > Great!. I will try those. > > Thanx, > Alex > > On Sun, Feb 18, 2018 at 1:53 PM, Alex K wrote: > >> Seems that this is due to: >> >> https://bugzilla.redhat.com/show_bug.cgi?id=1446147 >> >> I will check If i can find newer guest agents. 
>> >> On Sun, Feb 18, 2018 at 1:46 PM, Alex K wrote: >> >>> Hi all, >>> >>> I am running a 3 node ovirt 4.1 selft hosted setup. >>> I have consistently observed that windows 10 VMs with SPICE console fail >>> to live migrate. Other VMs (windows server 2016) do migrate normally. >>> >>> VDSM log indicates: >>> >>> internal error: unable to execute QEMU command 'migrate': qxl: guest >>> bug: command not in ram bar (migration:287) >>> 2018-02-18 11:41:59,586+0000 ERROR (migsrc/2cf3a254) [virt.vm] >>> (vmId='2cf3a254-8450-44cf-b023-e0a49827dac0') Failed to migrate >>> (migration:429) >>> if ret == -1: raise libvirtError ('virDomainMigrateToURI3() failed', >>> dom=self) >>> libvirtError: internal error: unable to execute QEMU command 'migrate': >>> qxl: guest bug: command not in ram bar >>> >>> Seems as a guest agent bug for Windows 10? Is there any fix? >>> >>> Thanx, >>> Alex >>> >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Sun Feb 18 18:03:59 2018 From: rightkicktech at gmail.com (Alex K) Date: Sun, 18 Feb 2018 20:03:59 +0200 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: Hi all, Are there any examples on using ovirt-imageio to backup a VM or where I could find details of RESTAPI for this functionality? I might attempt to write a python script for this purpose. Thanx, Alex On Tue, Feb 13, 2018 at 8:59 PM, Alex K wrote: > Thank you Nir for the below. > > I am putting some comments inline in blue. > > > On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer wrote: > >> On Wed, Jan 24, 2018 at 3:19 PM Alex K wrote: >> >>> Hi all, >>> >>> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup on >>> top glusterfs. >>> On some VMs (especially one Windows server 2016 64bit with 500 GB of >>> disk). 
>>> Guest agents are installed at the VMs. I almost always observe that
>>> during the backup of the VM the VM is rendered unresponsive (the dashboard
>>> shows a question mark at the VM status and the VM does not respond to ping
>>> or to anything).
>>>
>>> For scheduled backups I use:
>>>
>>> https://github.com/wefixit-AT/oVirtBackup
>>>
>>> The script does the following:
>>>
>>> 1. snapshot VM (this is done ok without any failure)
>>>
>> This is a very cheap operation
>>
>>> 2. Clone snapshot (this step renders the VM unresponsive)
>>>
>> This copies 500g of data. In the gluster case, it copies 1500g of data,
>> since in glusterfs the client is doing the replication.
>>
>> Maybe your network or gluster server is too slow? Can you describe the
>> network topology?
>>
>> Please also attach the volume info for the gluster volume, maybe it is
>> not configured in the best way?
>>
> The network is 1Gbit. The hosts (3 hosts) are decent ones and new hardware,
> with each host having 32GB RAM, 16 CPU cores and 2 TB of storage in RAID10.
> The VMs hosted (7 VMs) exhibit high performance. The VMs are Windows 2016
> and Windows 10.
> The network topology is: two networks defined at ovirt: ovirtmgmt is for
> the management and access network, and "storage" is a separate network where
> each server is connected with two network cables to a managed switch with
> mode 6 load balancing. This storage network is used for gluster traffic.
> Attached is the volume configuration.
>
>> 3. Export Clone
>>
>> This copies 500g to the export domain. If the export domain is on
>> glusterfs as well, you now copy another 1500g of data.
>>
> The export domain is a Synology NAS with an NFS share. If the cloning
> succeeds then the export completes ok.
>
>> 4. Delete clone
>>
>> 5. Delete snapshot
>>
>> Not clear why you need to clone the vm before you export it; you could
>> save half of the data copies.
>>
> Because I cannot export the VM while it is running.
It does not provide > such option. > >> >> If you 4.2, you can backup the vm *while the vm is running* by: >> - Take a snapshot >> - Get the vm ovf from the engine api >> - Download the vm disks using ovirt-imageio and store the snaphosts in >> your backup >> storage >> - Delete a snapshot >> >> In this flow, you would copy 500g. >> >> I am not aware about this option. checking quickly at site this seems > that it is still half implemented? Is there any script that I may use and > test this? I am interested to have these backups scheduled. > > >> Daniel, please correct me if I'm wrong regarding doing this online. >> >> Regardless, a vm should not become non-responsive while cloning. Please >> file a bug >> for this and attach engine, vdsm, and glusterfs logs. >> >> > Nir >> >> Do you have any similar experience? Any suggestions to address this? >>> >>> I have never seen such issue with hosted Linux VMs. >>> >>> The cluster has enough storage to accommodate the clone. >>> >>> >>> Thanx, >>> >>> Alex >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nsoffer at redhat.com Sun Feb 18 18:52:58 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Sun, 18 Feb 2018 18:52:58 +0000 Subject: [ovirt-users] Ovirt backups lead to unresponsive VM In-Reply-To: References: Message-ID: On Sun, Feb 18, 2018 at 8:04 PM Alex K wrote: > Are there any examples on using ovirt-imageio to backup a VM or where I > could find details of RESTAPI for this functionality? > I might attempt to write a python script for this purpose. > Here: - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk_snapshots.py - https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/upload_disk_snapshots.py You probably need to add the vm configuration to complete the backup. 
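The four backup steps described in this thread (snapshot, fetch the OVF, download the disks via ovirt-imageio, delete the snapshot) map onto a small, ordered set of REST calls. A minimal sketch of that mapping follows; the engine URL and the VM/snapshot ids are made up, and the endpoint paths are meant to illustrate the v4 API layout rather than replace the linked SDK examples, which show the actual transfer code:

```python
# Sketch of one online-backup round against the oVirt v4 REST API.
# All names here are hypothetical; see the SDK examples above for
# working image-transfer code.
API = "https://engine.example.com/ovirt-engine/api"

def backup_plan(vm_id, snap_id):
    """Return the ordered (method, url) calls for one backup round."""
    return [
        # 1. take a snapshot of the running vm
        ("POST", f"{API}/vms/{vm_id}/snapshots"),
        # 2. fetch the vm configuration (OVF) to store alongside the disks
        ("GET", f"{API}/vms/{vm_id}?all_content=true"),
        # 3. start an image transfer, then download the data via ovirt-imageio
        ("POST", f"{API}/imagetransfers"),
        # 4. remove the snapshot once the download has finished
        ("DELETE", f"{API}/vms/{vm_id}/snapshots/{snap_id}"),
    ]

for method, url in backup_plan("b2f0f5ad", "1f6bf0d5"):
    print(method, url)
```

Only the snapshot data changed since the base image needs to be read in step 3, which is why this flow copies far less than the clone-and-export script discussed above.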
> > Thanx, > Alex > > On Tue, Feb 13, 2018 at 8:59 PM, Alex K wrote: > >> Thank you Nir for the below. >> >> I am putting some comments inline in blue. >> >> >> On Tue, Feb 13, 2018 at 7:33 PM, Nir Soffer wrote: >> >>> On Wed, Jan 24, 2018 at 3:19 PM Alex K wrote: >>> >>>> Hi all, >>>> >>>> I have a cluster with 3 nodes, using ovirt 4.1 in a self hosted setup >>>> on top glusterfs. >>>> On some VMs (especially one Windows server 2016 64bit with 500 GB of >>>> disk). Guest agents are installed at VMs. i almost always observe that >>>> during the backup of the VM the VM is rendered unresponsive (dashboard >>>> shows a question mark at the VM status and VM does not respond to ping or >>>> to anything). >>>> >>>> For scheduled backups I use: >>>> >>>> https://github.com/wefixit-AT/oVirtBackup >>>> >>>> The script does the following: >>>> >>>> 1. snapshot VM (this is done ok without any failure) >>>> >>> >>> This is a very cheap operation >>> >>> >>>> 2. Clone snapshot (this steps renders the VM unresponsive) >>>> >>> >>> This copy 500g of data. In gluster case, it copies 1500g of data, since >>> in glusterfs, the client >>> is doing the replication. >>> >>> Maybe your network or gluster server is too slow? Can you describe the >>> network topology? >>> >>> Please attach also the volume info for the gluster volume, maybe it is >>> not configured in the >>> best way? >>> >> >> The network is 1Gbit. The hosts (3 hosts) are decent ones and new >> hardware with each host having: 32GB RAM, 16 CPU cores and 2 TB of storage >> in RAID10. >> The VMS hosted (7 VMs) exhibit high performance. The VMs are Windows 2016 >> and Windows10. >> The network topology is: two networks defined at ovirt: ovirtmgmt is for >> the managment and access network and "storage" is a separate network, where >> each server is connected with two network cables at a managed switch with >> mode 6 load balancing. this storage network is used for gluster traffic. >> Attached the volume configuration. 
>> >>> 3. Export Clone >>>> >>> >>> This copy 500g to the export domain. If the export domain is on >>> glusterfs as well, you >>> copy now another 1500g of data. >>> >>> >> Export domain a Synology NAS with NFS share. If the cloning succeeds >> then export is completed ok. >> >>> 4. Delete clone >>>> >>>> 5. Delete snapshot >>>> >>> >>> Not clear why do you need to clone the vm before you export it, you can >>> save half of >>> the data copies. >>> >> Because I cannot export the VM while it is running. It does not provide >> such option. >> >>> >>> If you 4.2, you can backup the vm *while the vm is running* by: >>> - Take a snapshot >>> - Get the vm ovf from the engine api >>> - Download the vm disks using ovirt-imageio and store the snaphosts in >>> your backup >>> storage >>> - Delete a snapshot >>> >>> In this flow, you would copy 500g. >>> >>> I am not aware about this option. checking quickly at site this seems >> that it is still half implemented? Is there any script that I may use and >> test this? I am interested to have these backups scheduled. >> >> >>> Daniel, please correct me if I'm wrong regarding doing this online. >>> >>> Regardless, a vm should not become non-responsive while cloning. Please >>> file a bug >>> for this and attach engine, vdsm, and glusterfs logs. >>> >>> >> Nir >>> >>> Do you have any similar experience? Any suggestions to address this? >>>> >>>> I have never seen such issue with hosted Linux VMs. >>>> >>>> The cluster has enough storage to accommodate the clone. >>>> >>>> >>>> Thanx, >>>> >>>> Alex >>>> >>>> >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nsoffer at redhat.com Sun Feb 18 19:07:40 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Sun, 18 Feb 2018 19:07:40 +0000 Subject: [ovirt-users] qcow2 images corruption In-Reply-To: References: Message-ID: On Wed, Feb 7, 2018 at 7:09 PM Nicolas Ecarnot wrote: > Hello, > > TL; DR : qcow2 images keep getting corrupted. Any workaround? > > Long version: > This discussion has already been launched by me on the oVirt and on > qemu-block mailing list, under similar circumstances but I learned > further things since months and here are some informations : > > - We are using 2 oVirt 3.6.7.5-1.el7.centos datacenters, using CentOS > 7.{2,3} hosts > - Hosts : > - CentOS 7.2 1511 : > - Kernel = 3.10.0 327 > - KVM : 2.3.0-31 > - libvirt : 1.2.17 > - vdsm : 4.17.32-1 > - CentOS 7.3 1611 : > - Kernel 3.10.0 514 > - KVM : 2.3.0-31 > - libvirt 2.0.0-10 > - vdsm : 4.17.32-1 > - Our storage is 2 Equallogic SANs connected via iSCSI on a dedicated > network > In 3.6 and iSCSI storage you have the issue of lvmetad service, activating oVirt volumes by default, and also activating guest lvs inside oVirt raw volumes. This can lead to data corruption if an lv was activated before it was extended on another host, and the lv size on the host does not reflect the actual lv size. We had many bugs related to this, check this for related bugs: https://bugzilla.redhat.com/1374545 To avoid this issue, you need to 1. edit /etc/lvm/lvm.conf global/use_lvmetad to: use_lvmetad = 0 2. disable and mask these services: - lvm2-lvmetad.socket - lvm2-lvmetad.service Note that this will may cause warnings from systemd during boot, the warnings are harmless: https://bugzilla.redhat.com/1462792 For extra safety and better performance, you should also setup lvm filter on all hosts. Check this for example how it is done in 4.x: https://www.ovirt.org/blog/2017/12/lvm-configuration-the-easy-way/ Since you run 3.6 you will have to setup the filter manually in the same way. 
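The two lvmetad changes above can be scripted. This sketch performs the lvm.conf edit on a scratch copy so it can be dry-run safely; on a real host the target file is /etc/lvm/lvm.conf, and the systemctl commands (shown as comments because they need root) disable and mask the services as described:

```shell
conf=$(mktemp)                       # scratch stand-in for /etc/lvm/lvm.conf
printf 'global {\n    use_lvmetad = 1\n}\n' > "$conf"

# 1. turn the lvmetad cache off in the LVM config
sed -i 's/use_lvmetad = 1/use_lvmetad = 0/' "$conf"
new_line=$(grep 'use_lvmetad' "$conf")
echo "$new_line"

# 2. on the real host, also disable and mask the daemon (needs root):
#    systemctl disable --now lvm2-lvmetad.socket lvm2-lvmetad.service
#    systemctl mask lvm2-lvmetad.socket lvm2-lvmetad.service

rm -f "$conf"
```

Apply the same sed expression to /etc/lvm/lvm.conf on every host, then run the commented systemctl commands; the boot-time warnings mentioned above are expected and harmless.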
Nir > - Depends on weeks, but all in all, there are around 32 hosts, 8 storage > domains and for various reasons, very few VMs (less than 200). > - One peculiar point is that most of our VMs are provided an additional > dedicated network interface that is iSCSI-connected to some volumes of > our SAN - these volumes not being part of the oVirt setup. That could > lead to a lot of additional iSCSI traffic. > > From times to times, a random VM appears paused by oVirt. > Digging into the oVirt engine logs, then into the host vdsm logs, it > appears that the host considers the qcow2 image as corrupted. > Along what I consider as a conservative behavior, vdsm stops any > interaction with this image and marks it as paused. > Any try to unpause it leads to the same conservative pause. > > After having found (https://access.redhat.com/solutions/1173623) the > right logical volume hosting the qcow2 image, I can run qemu-img check > on it. > - On 80% of my VMs, I find no errors. > - On 15% of them, I find Leaked cluster errors that I can correct using > "qemu-img check -r all" > - On 5% of them, I find Leaked clusters errors and further fatal errors, > which can not be corrected with qemu-img. > In rare cases, qemu-img can correct them, but destroys large parts of > the image (becomes unusable), and on other cases it can not correct them > at all. > > Months ago, I already sent a similar message but the error message was > about No space left on device > (https://www.mail-archive.com/qemu-block at gnu.org/msg00110.html). > > This time, I don't have this message about space, but only corruption. 
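The 80/15/5 triage described above is easy to automate across many logical volumes. A sketch that buckets images by the text `qemu-img check` prints (the matched phrases mirror typical qemu-img messages but may differ between qemu versions, so treat this as a starting point):

```python
def classify(check_output: str) -> str:
    """Bucket a `qemu-img check` result into the categories used above.

    The matched phrases are based on typical qemu-img output and may
    vary across versions.
    """
    text = check_output.lower()
    if "no errors were found" in text:
        return "clean"
    if "errors were found" in text:
        return "corrupt"   # repairing with -r all is risky, as described above
    if "leaked clusters" in text:
        return "leaked"    # usually repairable with: qemu-img check -r leaks
    return "unknown"

print(classify("No errors were found on the image."))           # clean
print(classify("7 leaked clusters were found on the image."))   # leaked
print(classify("ERROR cluster 5 refcount=0\n"
               "2 errors were found on the image."))            # corrupt
```

Running this over the check output of each volume would let the nightly campaign sort images into "leave alone", "repair leaks", and "preserve for debugging" without manual inspection.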
> > I kept reading and found a similar discussion in the Proxmox group : > https://lists.ovirt.org/pipermail/users/2018-February/086750.html > > > https://forum.proxmox.com/threads/qcow2-corruption-after-snapshot-or-heavy-disk-i-o.32865/page-2 > > What I read similar to my case is : > - usage of qcow2 > - heavy disk I/O > - using the virtio-blk driver > > In the proxmox thread, they tend to say that using virtio-scsi is the > solution. Having asked this question to oVirt experts > (https://lists.ovirt.org/pipermail/users/2018-February/086753.html) but > it's not clear the driver is to blame. > > I agree with the answer Yaniv Kaul gave to me, saying I have to properly > report the issue, so I'm longing to know which peculiar information I > can give you now. > > As you can imagine, all this setup is in production, and for most of the > VMs, I can not "play" with them. Moreover, we launched a campaign of > nightly stopping every VM, qemu-img check them one by one, then boot. > So it might take some time before I find another corrupted image. > (which I'll preciously store for debug) > > Other informations : We very rarely do snapshots, but I'm close to > imagine that automated migrations of VMs could trigger similar behaviors > on qcow2 images. > > Last point about the versions we use : yes that's old, yes we're > planning to upgrade, but we don't know when. > > Regards, > > -- > Nicolas ECARNOT > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Alex at unix1337.com Sun Feb 18 22:46:29 2018 From: Alex at unix1337.com (Alex Bartonek) Date: Sun, 18 Feb 2018 17:46:29 -0500 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: -------- Original Message -------- On February 18, 2018 12:32 AM, Yedidyah Bar David wrote: >On Fri, Feb 16, 2018 at 5:50 AM, Alex Bartonek Alex at unix1337.com wrote: >>-------- Original Message -------- >> On February 15, 2018 12:52 AM, Yedidyah Bar David didi at redhat.com wrote: >>>On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek Alex at unix1337.com wrote: >>>>-------- Original Message -------- >>>> On February 14, 2018 2:23 AM, Yedidyah Bar David didi at redhat.com wrote: >>>>>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote: >>>>>>I've built and rebuilt about 4 oVirt servers. Consider myself pretty good >>>>>> at this. LOL. >>>>>> So I am setting up a oVirt server for a friend on his r710. CentOS 7, ovirt >>>>>> 4.2. /etc/hosts has the correct IP and FQDN setup. >>>>>> When I build a VM and try to open a console session via SPICE I am unable >>>>>> to connect to the graphic server. I'm connecting from a Windows 10 box. >>>>>> Using virt-manager to connect. >>>>>> What happens when you try? >>>>>> Unable to connect to the graphic console is what the error says. Here is the .vv file other than the cert stuff in it: >>>>>> [virt-viewer] >>>>>> type=spice >>>>>> host=192.168.1.83 >>>>>> port=-1 >>>>>> password= >>>>>> Password is valid for 120 seconds. >>>>>> >>>>>delete-this-file=1 >>>> fullscreen=0 >>>> title=Win_7_32bit:%d >>>> toggle-fullscreen=shift+f11 >>>> release-cursor=shift+f12 >>>> tls-port=5900 >>>> enable-smartcard=0 >>>> enable-usb-autoshare=1 >>>> usb-filter=-1,-1,-1,-1,0 >>>> tls-ciphers=DEFAULT >>>>host-subject=O=williams.com,CN=randb.williams.com >>>> Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue..no go. 
>>>>Did you verify that you can connect there manually (e.g. with telnet)? >>> Can you run a sniffer on both sides to make sure traffic passes correctly? >>> Can you check vdsm/libvirt logs on the host side? >>>Ok.. I must have tanked it on install with the firewall. The firewall is blocking port 5900. This is on CentOS 7. If I flush the rules, it works. >> > > Thanks for the report. > > Did you choose to have firewall configured automatically, or did you > configure it yourself? I did configure the host to manage the firewall. Just to make sure, I deleted the host, recreated and still had the issue. I ended up making the firewall rule manually which took care of it. Never had to do that before. -Alex From didi at redhat.com Mon Feb 19 06:56:05 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 19 Feb 2018 08:56:05 +0200 Subject: [ovirt-users] Unable to connect to the graphic server In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 12:46 AM, Alex Bartonek wrote: > > > -------- Original Message -------- > On February 18, 2018 12:32 AM, Yedidyah Bar David wrote: > > >On Fri, Feb 16, 2018 at 5:50 AM, Alex Bartonek Alex at unix1337.com wrote: > >>-------- Original Message -------- > >> On February 15, 2018 12:52 AM, Yedidyah Bar David didi at redhat.com wrote: > >>>On Wed, Feb 14, 2018 at 9:20 PM, Alex Bartonek Alex at unix1337.com wrote: > >>>>-------- Original Message -------- > >>>> On February 14, 2018 2:23 AM, Yedidyah Bar David didi at redhat.com wrote: > >>>>>On Wed, Feb 14, 2018 at 5:20 AM, Alex Bartonek Alex at unix1337.com wrote: > >>>>>>I've built and rebuilt about 4 oVirt servers. Consider myself pretty good > >>>>>> at this. LOL. > >>>>>> So I am setting up a oVirt server for a friend on his r710. CentOS 7, ovirt > >>>>>> 4.2. /etc/hosts has the correct IP and FQDN setup. > >>>>>> When I build a VM and try to open a console session via SPICE I am unable > >>>>>> to connect to the graphic server. I'm connecting from a Windows 10 box. 
> >>>>>> Using virt-manager to connect. > >>>>>> What happens when you try? > >>>>>> Unable to connect to the graphic console is what the error says. Here is the .vv file other than the cert stuff in it: > >>>>>> [virt-viewer] > >>>>>> type=spice > >>>>>> host=192.168.1.83 > >>>>>> port=-1 > >>>>>> password= > >>>>>> Password is valid for 120 seconds. > >>>>>> > >>>>>delete-this-file=1 > >>>> fullscreen=0 > >>>> title=Win_7_32bit:%d > >>>> toggle-fullscreen=shift+f11 > >>>> release-cursor=shift+f12 > >>>> tls-port=5900 > >>>> enable-smartcard=0 > >>>> enable-usb-autoshare=1 > >>>> usb-filter=-1,-1,-1,-1,0 > >>>> tls-ciphers=DEFAULT > >>>>host-subject=O=williams.com,CN=randb.williams.com > >>>> Port 5900 is listening by IP on the server, so that looks correct. I shut the firewall off just in case it was the issue..no go. > >>>>Did you verify that you can connect there manually (e.g. with telnet)? > >>> Can you run a sniffer on both sides to make sure traffic passes correctly? > >>> Can you check vdsm/libvirt logs on the host side? > >>>Ok.. I must have tanked it on install with the firewall. The firewall is blocking port 5900. This is on CentOS 7. If I flush the rules, it works. > >> > > > > Thanks for the report. > > > > Did you choose to have firewall configured automatically, or did you > > configure it yourself? > > > I did configure the host to manage the firewall. Just to make sure, I deleted the host, recreated and still had the issue. I ended up making the firewall rule manually which took care of it. Never had to do that before. Can you please share relevant logs? On the engine in /var/log/ovirt-engine/host-deploy and /var/log/ovirt-engine/engine.log. Thanks! Also adding Ondra. 
Best regards, -- Didi From Jeremy_Tourville at hotmail.com Sun Feb 18 16:32:34 2018 From: Jeremy_Tourville at hotmail.com (Jeremy Tourville) Date: Sun, 18 Feb 2018 16:32:34 +0000 Subject: [ovirt-users] Spice Client Connection Issues Using aSpice Message-ID: Hello, I am having trouble connecting to my guest vm (Kali Linux) which is running spice. My engine is running version: 4.2.1.7-1.el7.centos. I am using oVirt Node as my host running version: 4.2.1.1. I have taken the following steps to try and get everything running properly. 1. Download the root CA certificate https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA 2. Edit the vm and define the graphical console entries. Video type is set to QXL, Graphics protocol is spice, USB support is enabled. 3. Install the guest agent in Debian per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/ It is my understanding that installing the guest agent will also install the virt IO device drivers. 4. Install the spice-vdagent per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/ 5. On the aSpice client I have imported the CA certficate from step 1 above. I defined the connection using the IP of my Node and TLS port 5901. To troubleshoot my connection issues I confirmed the port being used to listen. virsh # domdisplay Kali spice://172.30.42.12?tls-port=5901 I see the following when attempting to connect. tail -f /var/log/libvirt/qemu/Kali.log 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80 ((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1 I came across some documentation that states in the caveat section "Certificate of spice SSL should be separate certificate." 
https://www.ovirt.org/develop/release-management/features/infra/pki/

Is this still the case for version 4? The document references versions 3.2 and 3.3. If so, how do I generate a new certificate for use with SPICE?

Please let me know if you require further info to troubleshoot, I am happy to provide it.

Many thanks in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Markus.Schaufler at ooe.gv.at Mon Feb 19 07:58:34 2018
From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at)
Date: Mon, 19 Feb 2018 07:58:34 +0000
Subject: [ovirt-users] Install Windows VM issues
Message-ID: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local>

Hi!

I'm new here - hope you can forgive my "newbie questions".

I want to install a Server 2016, so I uploaded both the Windows ISO and the virtio drivers ISO to the ISO Domain location. In the VM options I can choose both ISO files.

But as referred to in a howto, I need to use a floppy device with a flv file. I found the FLV drivers file, but I cannot find any floppy device - there's no option to choose.

So I tried to add a second CD-ROM, because in Proxmox that already worked. But I cannot find any option to add a second CD-ROM either.

Any idea how I can provide the drivers for the Windows installation?

Thanks for any help!
Markus

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jonbae77 at gmail.com Mon Feb 19 08:11:00 2018
From: jonbae77 at gmail.com (Jon bae)
Date: Mon, 19 Feb 2018 09:11:00 +0100
Subject: [ovirt-users] Install Windows VM issues
In-Reply-To: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local>
References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local>
Message-ID:

Hi Markus,

you need to use the "Run Once" option to be able to insert the floppy.

Jonathan

2018-02-19 8:58 GMT+01:00 :

> Hi!
>
> I'm new here - hope you can forgive my "newbie questions".
> I want to install a Server 2016, so I uploaded both the Windows ISO and
> the virtio drivers ISO to the ISO Domain location. In the VM options I can
> choose both ISO files.
>
> But as referred to in a howto, I need to use a floppy device with a flv
> file. I found the FLV drivers file, but I cannot find any floppy device -
> there's no option to choose.
>
> So I tried to add a second CD-ROM, because in Proxmox that already worked.
> But I cannot find any option to add a second CD-ROM either.
>
> Any idea how I can provide the drivers for the Windows installation?
>
> Thanks for any help!
>
> Markus
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From matthias.leopold at meduniwien.ac.at Mon Feb 19 08:43:41 2018
From: matthias.leopold at meduniwien.ac.at (Matthias Leopold)
Date: Mon, 19 Feb 2018 09:43:41 +0100
Subject: [ovirt-users] Install Windows VM issues
In-Reply-To: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local>
References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local>
Message-ID: <7f68aab0-ed06-62bc-c2b1-a02cd8b96278@meduniwien.ac.at>

Hi Markus,

you don't need a second CD-ROM or floppy drive. Choose "Run once" - "Boot
Options" - "Attach CD" to attach the Windows ISO. When the install process
gets to detecting storage devices, choose "Change CD" and insert the VirtIO
ISO. When the hard disk is detected, revert back to the Windows installer
ISO for the rest of the install process.

Good luck
Matthias

Am 2018-02-19 um 08:58 schrieb Markus.Schaufler at ooe.gv.at:
> Hi!
>
> I'm new here - hope you can forgive my "newbie questions".
>
> I want to install a Server 2016, so I uploaded both the Windows ISO and
> the virtio drivers ISO to the ISO Domain location. In the VM options I
> can choose both ISO files.
> > But as refered in a Howto, I need to use a Floppy device with a flv > file. I found the FLV drivers file, but I cannot find any Floppy device > ? there?s no option to choose. > > So I tried to add a second CD-Rom because in Proxmox that already did > work. But I cannot find any option to add a second CD-Rom too. > > Any idea how I can provide the drivers for the windows installation? > > Thanks for any help! > > Markus > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Matthias Leopold IT Systems & Communications Medizinische Universit?t Wien Spitalgasse 23 / BT 88 /Ebene 00 A-1090 Wien Tel: +43 1 40160-21241 Fax: +43 1 40160-921200 From M.Vrgotic at activevideo.com Mon Feb 19 08:49:50 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Mon, 19 Feb 2018 08:49:50 +0000 Subject: [ovirt-users] How to protect SHE VM from being deleted in following setup In-Reply-To: References: Message-ID: <5FBC2B5F-E3A2-4B8A-95A2-06F6938DEBAC@ictv.com> Hi Michal, This is exactly what I would expect to achieve by default, if creating regular user. However, these users are allowed Admin access, and therefore, I have created ?very? limited accounts, so that they can Create,Manipulate,Delete VMs, but I do not see how and where I can set that this is allowed only for VMs they own. Here are the screenshots of the Role ?AWS VM Operator? I created for them: [cid:image001.png at 01D3A966.FB0232E0] [cid:image002.png at 01D3A966.FB0232E0] [cid:image003.png at 01D3A966.FB0232E0] Following one actually contains what they are allowed to: [cid:image004.png at 01D3A966.FB0232E0] What am I missing? Kindly awaiting your reply. Marko From: Michal Skrivanek Date: Sunday, 18 February 2018 at 15:23 To: "Vrgotic, Marko" Cc: users Subject: Re: [ovirt-users] How to protect SHE VM from being deleted in following setup Why do you give them permissions to HE VM? 
You should be able to give them creation, but not let them delete VMs they do not own -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 53501 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 41789 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 13484 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 26559 bytes Desc: image004.png URL:
From jonbae77 at gmail.com Mon Feb 19 08:50:05 2018 From: jonbae77 at gmail.com (Jon bae) Date: Mon, 19 Feb 2018 09:50:05 +0100 Subject: [ovirt-users] Fwd: Install Windows VM issues In-Reply-To: <9D6F18D2AC0D5245BE068C2BEBC06946284C9D@msli01-202.res01.ads.ooe.local> References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local> <9D6F18D2AC0D5245BE068C2BEBC06946284C9D@msli01-202.res01.ads.ooe.local> Message-ID: Hi Markus, I installed Windows 2016 just two weeks ago, so in general it must work. Have you inserted the Windows ISO into the DVD drive and the floppy file into the floppy? After booting and selecting installation, a bit later a menu must come up where you normally choose the hard drive for installation, but this is empty. There is a button with "install driver" or something similar. When you click on it, you can navigate to the floppy drive. When this is not working, you can also change your HDD to IDE and install Windows. Add a second drive with virtIO, insert the driver CD and install the driver. Reboot, remove the second HDD and change the main disk to virtIO too. But this is a bit hacky :). Jonathan Hi Jonathan, thanks for your quick reply! I started with run once and attached the flv to the floppy, but still there's no floppy or cd-rom drive with the drivers on it. Any hint on this? Markus Hi Markus, you need to use the "run once" option to be able to insert the floppy. Jonathan 2018-02-19 8:58 GMT+01:00 : Hi! I'm new here - hope you can forgive my "newbie questions". I want to install a Server 2016 - so I uploaded both the Windows ISO and the virtio drivers iso to the ISO Domain location. In the VM Options I can choose both ISO files. But as referred in a Howto, I need to use a Floppy device with a flv file. I found the FLV drivers file, but I cannot find any Floppy device - there's no option to choose. So I tried to add a second CD-Rom because in Proxmox that already did work. But I cannot find any option to add a second CD-Rom too. Any idea how I can provide the drivers for the windows installation? Thanks for any help! Markus -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Markus.Schaufler at ooe.gv.at Mon Feb 19 08:51:28 2018 From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at) Date: Mon, 19 Feb 2018 08:51:28 +0000 Subject: [ovirt-users] Install Windows VM issues In-Reply-To: <7f68aab0-ed06-62bc-c2b1-a02cd8b96278@meduniwien.ac.at> References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local> <7f68aab0-ed06-62bc-c2b1-a02cd8b96278@meduniwien.ac.at> Message-ID: <9D6F18D2AC0D5245BE068C2BEBC06946284ECB@msli01-202.res01.ads.ooe.local> Hi Matthias, thanks for your quick response. I had already tried to use "Change CD" - but there's no effect at all. I also couldn't find any error messages in the log files. Also the "run once" option using the floppy does not work as there's simply no floppy in the setup guide visible (or any other device with the drivers on it). Any other idea? Thanks -----Ursprüngliche Nachricht----- Von: Matthias Leopold [mailto:matthias.leopold at meduniwien.ac.at] Gesendet: Montag, 19.
Februar 2018 09:44 An: Schaufler, Markus ; users at ovirt.org Betreff: Re: [ovirt-users] Install Windows VM issues Hi Markus, you don't need a second CD Rom or floppy drive. Choose "Run once" - "Boot Options" - "Attach CD" to attach the Windows ISO. When the install process gets to detecting storage devices you have to choose "Change CD", where you insert the VirtIO ISO. When the hard disk is detected you revert back to the Windows Installer ISO for the rest of the install process. Good luck Matthias Am 2018-02-19 um 08:58 schrieb Markus.Schaufler at ooe.gv.at: > Hi! > > I'm new here - hope you can forgive my "newbie questions". > > I want to install a Server 2016 - so I uploaded both the Windows ISO > and the virtio drivers iso to the ISO Domain location. In the VM > Options I can choose both ISO files. > > But as refered in a Howto, I need to use a Floppy device with a flv > file. I found the FLV drivers file, but I cannot find any Floppy > device - there's no option to choose. > > So I tried to add a second CD-Rom because in Proxmox that already did > work. But I cannot find any option to add a second CD-Rom too. > > Any idea how I can provide the drivers for the windows installation? > > Thanks for any help! 
> > Markus > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Matthias Leopold IT Systems & Communications Medizinische Universität Wien Spitalgasse 23 / BT 88 /Ebene 00 A-1090 Wien Tel: +43 1 40160-21241 Fax: +43 1 40160-921200
From Markus.Schaufler at ooe.gv.at Mon Feb 19 09:11:54 2018 From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at) Date: Mon, 19 Feb 2018 09:11:54 +0000 Subject: [ovirt-users] Fwd: Install Windows VM issues In-Reply-To: References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local> <9D6F18D2AC0D5245BE068C2BEBC06946284C9D@msli01-202.res01.ads.ooe.local> Message-ID: <9D6F18D2AC0D5245BE068C2BEBC06946284EE6@msli01-202.res01.ads.ooe.local> Hi! I managed to get it working - the only thing that worked for me is changing the CD at the web interface -> Virtual Machines -> mark the VM -> click on the "three points" (further options) on the right side --> and use there the option "Change CD". The other "Change CD" option directly at the machine via spice or vnc does not work at all. Thanks all! Von: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] Im Auftrag von Jon bae Gesendet: Montag, 19. Februar 2018 09:50 An: users Betreff: [ovirt-users] Fwd: Install Windows VM issues Hi Markus, I installed Windows 2016 just two weeks ago, so in general it must work. Have you inserted the Windows ISO into the DVD drive and the floppy file into the floppy? After booting and selecting installation, a bit later a menu must come up where you normally choose the hard drive for installation, but this is empty. There is a button with "install driver" or something similar. When you click on it, you can navigate to the floppy drive. When this is not working, you can also change your HDD to IDE and install Windows. Add a second drive with virtIO, insert the driver CD and install the driver. Reboot, remove the second HDD and change the main disk to virtIO too. But this is a bit hacky :). Jonathan Hi Jonathan, thanks for your quick reply! I started with run once and attached the flv to the floppy, but still there's no floppy or cd-rom drive with the drivers on it. Any hint on this? Markus Hi Markus, you need to use the "run once" option to be able to insert the floppy. Jonathan 2018-02-19 8:58 GMT+01:00 >: Hi! I'm new here - hope you can forgive my "newbie questions". I want to install a Server 2016 - so I uploaded both the Windows ISO and the virtio drivers iso to the ISO Domain location. In the VM Options I can choose both ISO files. But as referred in a Howto, I need to use a Floppy device with a flv file. I found the FLV drivers file, but I cannot find any Floppy device - there's no option to choose. So I tried to add a second CD-Rom because in Proxmox that already did work. But I cannot find any option to add a second CD-Rom too. Any idea how I can provide the drivers for the windows installation? Thanks for any help! Markus -------------- next part -------------- An HTML attachment was scrubbed... URL:
From matthias.leopold at meduniwien.ac.at Mon Feb 19 09:17:37 2018 From: matthias.leopold at meduniwien.ac.at (Matthias Leopold) Date: Mon, 19 Feb 2018 10:17:37 +0100 Subject: [ovirt-users] Install Windows VM issues In-Reply-To: <9D6F18D2AC0D5245BE068C2BEBC06946284ECB@msli01-202.res01.ads.ooe.local> References: <9D6F18D2AC0D5245BE068C2BEBC06946284C58@msli01-202.res01.ads.ooe.local> <7f68aab0-ed06-62bc-c2b1-a02cd8b96278@meduniwien.ac.at> <9D6F18D2AC0D5245BE068C2BEBC06946284ECB@msli01-202.res01.ads.ooe.local> Message-ID: Am 2018-02-19 um 09:51 schrieb Markus.Schaufler at ooe.gv.at: > Hi Matthias, > > thanks for your quick response. > I had already tried to use "Change CD" - but there's no effect at all. I also couldn't find any error messages in the log files.
> > Also the "run once" option using the floppy does not work as there's simply no floppy in the setup guide visible (or any other device with the drivers on it). There's no need for a floppy device. Steps to take (with oVirt 4.1.9 and Virtio-SCSI disk in VM): 1. "Run once" - "Boot Options" - "Attach CD" to attach the Windows 2016R2 ISO 2. Windows setup starts in console window 3. Choose "Install Windows only" 4. Windows setup presents "Where do you want to install Windows?" 5. oVirt "Change CD" - choose oVirt-toolsSetup-4.15.fc24.iso (has to be in ISO domain) 6. Windows setup "Load driver" - "No signed drivers found" - OK - "Browse" - navigate to CD ROM drive "vioscsi - win2016r2 - amd64" folder - driver is highlighted - Next 7. Windows setup presents "Drive 0 Unallocated Space" (with warning: "Windows can't be installed on this device") 8. oVirt "Change CD" - choose Windows 2016R2 ISO 9. Windows setup "Refresh" 10. Warning disappears - Press "Next" Matthias From spfma.tech at e.mail.fr Mon Feb 19 09:17:07 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Mon, 19 Feb 2018 10:17:07 +0100 Subject: [ovirt-users] Disk image upload pausing Message-ID: <20180219091707.5A406E2266@smtp01.mail.de> Hi, I am trying to build a new vm based on a vhd image coming from a windows machine. I converted the image to raw, and I am now trying to import it in the engine. After setting up the CA in my browser, the import process starts but stops after a while with "paused by system" status. I can resume it, but it pauses without transferring more. The engine logs don't explain much, I see a line for the start and the next one for the pause. My network seems to work correctly, and I have plenty of space in the storage domain. What can cause the process to pause ? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... 
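[Editor's note on the upload question above: transfers that start and then flip to "paused by system" are frequently a connectivity or certificate-trust problem between the browser, the ovirt-imageio-proxy on the engine (TCP 54323 by default) and the ovirt-imageio-daemon on the host (TCP 54322), rather than a storage problem. A quick reachability check could look like this sketch; the host names are placeholders and the ports are the defaults, adjust for your deployment:]

```python
import socket

def reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    # Placeholder names -- replace with your engine and host FQDNs.
    checks = [
        ("engine.example.lan", 54323),  # ovirt-imageio-proxy (engine)
        ("host1.example.lan", 54322),   # ovirt-imageio-daemon (host)
    ]
    for host, port in checks:
        state = "open" if reachable(host, port) else "unreachable"
        print(f"{host}:{port} {state}")
```

[If the daemon port is unreachable, check that the imageio services are actually running on the host doing the transfer; a CA certificate the browser does not fully trust produces the same "paused by system" symptom.]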
URL: From tjelinek at redhat.com Mon Feb 19 10:19:04 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Mon, 19 Feb 2018 11:19:04 +0100 Subject: [ovirt-users] Spice Client Connection Issues Using aSpice In-Reply-To: References: Message-ID: On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville < Jeremy_Tourville at hotmail.com> wrote: > Hello, > > I am having trouble connecting to my guest vm (Kali Linux) which is > running spice. My engine is running version: 4.2.1.7-1.el7.centos. > > I am using oVirt Node as my host running version: 4.2.1.1. > > > I have taken the following steps to try and get everything running > properly. > > 1. Download the root CA certificate https:// > ovirtengine.lan/ovirt-engine/services/pki-resource? > resource=ca-certificate&format=X509-PEM-CA > > 2. Edit the vm and define the graphical console entries. Video type > is set to QXL, Graphics protocol is spice, USB support is enabled. > 3. Install the guest agent in Debian per the instructions here - > https://www.ovirt.org/documentation/how-to/guest- > agent/install-the-guest-agent-in-debian/ > > It is my understanding that installing the guest agent will also install > the virt IO device drivers. > 4. Install the spice-vdagent per the instructions here - > https://www.ovirt.org/documentation/how-to/guest- > agent/install-the-spice-guest-agent/ > > 5. On the aSpice client I have imported the CA certficate from step 1 > above. I defined the connection using the IP of my Node and TLS port 5901. > > are you really using aSPICE client (e.g. the android SPICE client?). If yes, maybe you want to try to open it using moVirt ( https://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt&hl=en) which delegates the console to aSPICE but configures everything including the certificates on it. Should be much simpler than configuring it by hand.. > > To troubleshoot my connection issues I confirmed the port being used to > listen. 
> virsh # domdisplay Kali > spice://172.30.42.12?tls-port=5901 > > I see the following when attempting to connect. > tail -f /var/log/libvirt/qemu/Kali.log > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert > internal error:s3_pkt.c:1493:SSL alert number 80 > ((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: > SSL_accept failed, error=1 > > I came across some documentation that states in the caveat section "Certificate > of spice SSL should be separate certificate." > https://www.ovirt.org/develop/release-management/features/infra/pki/ > > Is this still the case for version 4? The document references version 3.2 > and 3.3. If so, how do I generate a new certificate for use with spice? > Please let me know if you require further info to troubleshoot, I am happy > to provide it. Many thanks in advance. > > > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjelinek at redhat.com Mon Feb 19 10:26:28 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Mon, 19 Feb 2018 11:26:28 +0100 Subject: [ovirt-users] VM Portal - ADD Nic In-Reply-To: References: Message-ID: On Fri, Feb 16, 2018 at 11:37 AM, Thomas Fecke wrote: > Hey Guys, > > > > We got about 50 Users and 50 VLANS. Every User has his own Vlan. > > > > With 4.1 they could login in that User Portal. Select an Template or > create a new VM. Add a Disk and connect to there Nic. > this two issues are being tracked here: add disk: https://github.com/oVirt/ovirt-web-ui/issues/489 add nic: https://github.com/oVirt/ovirt-web-ui/issues/488 we are starting to develop them very soon (maybe today :) ) > > > I see that is no option to add a Disk anymore with 4.2 -> okay that?s fine > for me > > > > So they just can use Templates. But, there is now option to add the VM to > a nic. 
So I guess the Template nic is being used. > > > > But our Templates don?t got a nic because the user has his own networks. > > > > That mean I need to add about XX more Templates with every nic in it? > > > > Oh common J > > > > No way to add a nic via VM Portal? That really make the VM Portal unusable > for us > > > > We can?t be the only one using Templates like that. Now every VM set up in > VM Portal is in one network, that?s not good or do I miss something? > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.tambovskiy at gmail.com Mon Feb 19 11:13:34 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Mon, 19 Feb 2018 14:13:34 +0300 Subject: [ovirt-users] why host is not capable to run HE? Message-ID: Hello, Last weekend my cluster suffered form a massive power outage due to human mistake. I'm using SHE setup with Gluster, I managed to bring the cluster up quickly, but once again I have a problem with duplicated host_id ( https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and due to this second host is not capable to run HE. I manually updated file hosted_engine.conf with correct host_id and restarted agent & broker - no effect. Than I rebooted the host itself - still no changes. How to fix this issue? Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Mon Feb 19 13:54:43 2018 From: awels at redhat.com (Alexander Wels) Date: Mon, 19 Feb 2018 08:54:43 -0500 Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? In-Reply-To: References: Message-ID: <2485548.xbxGyNV15t@awels> On Friday, February 16, 2018 6:31:10 PM EST Zip wrote: > Are there any updated docs for the WebUI Plugins API? > Unfortunately no, I haven't had a chance to create updated documentation. 
However the first two links are mostly still accurate as we haven't done any major changes to the API. Some things to note that are different from the API documentation in https:// www.ovirt.org/develop/release-management/features/ux/uiplugins/ for 4.2: - alignRight no longer has any effect, as the UI in 4.2 no longer respects it. - none of the systemTreeNode selection code does anything (since there is no more system tree) - As noted in the documentation itself the RestApiSessionAcquired is no longer available as we have a proper SSO mechanism that you can utilize at this point. - Main Tabs are now called Main Views (but the api still calls them main tabs, so use the apis described). And sub tabs are now called detail tabs, but the same thing the API hasn't changed the naming convention so use subTabs. - mainTabActionButton location property no longer has any meaning and is ignored. That is it I think, we tried to make it so existing plugins would remain working even if some options no longer mean anything. > I have found the following which all appear to be old and no longer working? > > https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interface_Pl > ugins/ > https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ > http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_Sunny > vale_2013.pdf > > Thanks > > Zip From Markus.Schaufler at ooe.gv.at Mon Feb 19 14:10:24 2018 From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at) Date: Mon, 19 Feb 2018 14:10:24 +0000 Subject: [ovirt-users] WG: IPMI config In-Reply-To: References: Message-ID: <9D6F18D2AC0D5245BE068C2BEBC06946284F62@msli01-202.res01.ads.ooe.local> Hi! When configuring Powermanagement respectively Fence Agent I get following error: Any idea on this? 
2018-02-19 15:01:26,625+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FenceVdsVDSCommand] (default task-14) [7ab055b8-7afa-4495-a287-b3b66fd6a81e] START, FenceVdsVDSCommand(HostName = VIGT01-101.res01.ads.ooe.local, FenceVdsVDSCommandParameters:{hostId='1210495a-0680-4f5a-bcd0-345b9debf48c', targetVdsId='169e902e-9993-42c2-ad06-0925d3f217d6', action='STATUS', agent='FenceAgent:{id='null', hostId='null', order='1', type='ipmilan', ip='10.1.46.115', port='623', user='s.oVirt', password='***', encryptOptions='false', options=''}', policy='null'}), log id: 7274de5a 2018-02-19 15:01:26,732+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [7ab055b8-7afa-4495-a287-b3b66fd6a81e] EVENT_ID: VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host VIRZ01-101.res01.ads.ooe.local.Internal JSON-RPC error [cid:image001.png at 01D3A993.2380B300] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16035 bytes Desc: image001.png URL: From rightkicktech at gmail.com Mon Feb 19 14:15:24 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 19 Feb 2018 16:15:24 +0200 Subject: [ovirt-users] why host is not capable to run HE? In-Reply-To: References: Message-ID: You may try to put host in maintenance then reinstall by deploying engine also. On Feb 19, 2018 1:14 PM, "Artem Tambovskiy" wrote: > Hello, > > Last weekend my cluster suffered form a massive power outage due to human > mistake. > I'm using SHE setup with Gluster, I managed to bring the cluster up > quickly, but once again I have a problem with duplicated host_id ( > https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and > due to this second host is not capable to run HE. > > I manually updated file hosted_engine.conf with correct host_id and > restarted agent & broker - no effect. 
Than I rebooted the host itself - > still no changes. How to fix this issue? > > Regards, > Artem > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Mon Feb 19 14:22:51 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 19 Feb 2018 16:22:51 +0200 Subject: [ovirt-users] WG: IPMI config In-Reply-To: <9D6F18D2AC0D5245BE068C2BEBC06946284F62@msli01-202.res01.ads.ooe.local> References: <9D6F18D2AC0D5245BE068C2BEBC06946284F62@msli01-202.res01.ads.ooe.local> Message-ID: Try putting lanplus=1 at the options and test again On Feb 19, 2018 4:10 PM, wrote: > Hi! > > > > When configuring Powermanagement respectively Fence Agent I get following > error: > > Any idea on this? > > > > 2018-02-19 15:01:26,625+01 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.FenceVdsVDSCommand] (default task-14) > [7ab055b8-7afa-4495-a287-b3b66fd6a81e] START, FenceVdsVDSCommand(HostName > = VIGT01-101.res01.ads.ooe.local, FenceVdsVDSCommandParameters:{ > hostId='1210495a-0680-4f5a-bcd0-345b9debf48c', targetVdsId='169e902e-9993-42c2-ad06-0925d3f217d6', > action='STATUS', agent='FenceAgent:{id='null', hostId='null', order='1', > type='ipmilan', ip='10.1.46.115', port='623', user='s.oVirt', > password='***', encryptOptions='false', options=''}', policy='null'}), log > id: 7274de5a > > 2018-02-19 15:01:26,732+01 WARN [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (default task-14) > [7ab055b8-7afa-4495-a287-b3b66fd6a81e] EVENT_ID: > VDS_ALERT_FENCE_TEST_FAILED(9,001), Power Management test failed for Host > VIRZ01-101.res01.ads.ooe.local.Internal JSON-RPC error > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 16035 bytes Desc: not available URL: From stirabos at redhat.com Mon Feb 19 14:37:30 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 19 Feb 2018 15:37:30 +0100 Subject: [ovirt-users] why host is not capable to run HE? In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy < artem.tambovskiy at gmail.com> wrote: > Hello, > > Last weekend my cluster suffered form a massive power outage due to human > mistake. > I'm using SHE setup with Gluster, I managed to bring the cluster up > quickly, but once again I have a problem with duplicated host_id ( > https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and > due to this second host is not capable to run HE. > > I manually updated file hosted_engine.conf with correct host_id and > restarted agent & broker - no effect. Than I rebooted the host itself - > still no changes. How to fix this issue? > I'd suggest to run this command on the engine VM: sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c 'select vds_name, vds_spm_id from vds' (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id from vds' if still on 4.1) and check /etc/ovirt-hosted-engine/hosted-engine.conf on all the involved host. Maybe you can also have a leftover configuration file on undeployed host. 
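[Editor's note: the consistency check Simone describes can be made mechanical by comparing the host_id= value from each host's /etc/ovirt-hosted-engine/hosted-engine.conf with the engine's vds_spm_id values. A small sketch of that comparison; the conf snippets and host names below are invented:]

```python
import re

def parse_host_id(conf_text):
    """Extract host_id= from a hosted-engine.conf blob (None if absent)."""
    m = re.search(r"^host_id=(\d+)\s*$", conf_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def check_host_ids(conf_by_host, spm_id_by_host):
    """Compare each host's configured host_id against the engine's
    vds_spm_id; return a list of human-readable problems."""
    problems, used = [], {}
    for host, text in sorted(conf_by_host.items()):
        hid = parse_host_id(text)
        expected = spm_id_by_host.get(host)
        if hid != expected:
            problems.append(f"{host}: host_id={hid}, engine vds_spm_id={expected}")
        if hid in used:
            problems.append(f"{host} and {used[hid]} both use host_id={hid}")
        used[hid] = host
    return problems

# Invented example: host2 still carries host1's id after a redeploy.
confs = {"host1": "host_id=1\n", "host2": "host_id=1\n"}
spm_ids = {"host1": 1, "host2": 2}  # from the vds query above
for line in check_host_ids(confs, spm_ids):
    print(line)
```

[Any line it prints marks a host whose sanlock lease id would collide, which matches the symptom in this thread.]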
When you find a conflict you should manually bring down sanlock. If in doubt, a reboot of both hosts will solve it for sure. > > Regards, > Artem > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL:
From koehler at luis.uni-hannover.de Thu Feb 15 11:18:42 2018 From: koehler at luis.uni-hannover.de (Christoph Köhler) Date: Thu, 15 Feb 2018 12:18:42 +0100 Subject: [ovirt-users] oVirt 4.2 with ceph Message-ID: Hello, does someone have experience with cephfs as a vm-storage domain? I think about that but without any hints... Thanks for pointing me... -- Christoph Köhler Leibniz Universität IT Services Schloßwender Straße 5, 30159 Hannover Tel.: +49 511 762 794721 koehler at luis.uni-hannover.de http://www.luis.uni-hannover.de/scientific_computing.html
From vincent.kwiatkowski at ullink.com Thu Feb 15 15:16:54 2018 From: vincent.kwiatkowski at ullink.com (Vincent Kwiatkowski) Date: Thu, 15 Feb 2018 16:16:54 +0100 Subject: [ovirt-users] issue on engine deployment on oVirt node Message-ID: Hi Folks, I tried a few times to configure a simple oVirt engine on oVirt node. After the fresh install of the node, I connect to cockpit and launch the engine setup, then at the end I have the message that I need to connect to the VM with "hosted-engine --console" or via VNC. Via VNC, I can't do anything, and I have no prompt using the --console; I get the error: internal error: character device console0 is not using a PTY What can I do to continue the setup? Thx a lot in advance -- Vincent Kwiatkowski | Production System Engineer | ULLINK | D: +33 1 44 50 25 45 | T: +1 49 95 30 00 | 23/25 rue de Provence | 75009, Paris | vk at ullink.com Please consider the environment before printing this email -- *The information contained in or attached to this email is strictly confidential.
If you are not the intended recipient, please notify us immediately by telephone and return the message to us.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From simone.sanna at trssistemi.com Thu Feb 15 11:27:57 2018 From: simone.sanna at trssistemi.com (simone.sanna at trssistemi.com) Date: Thu, 15 Feb 2018 12:27:57 +0100 Subject: [ovirt-users] How to specify a logical network for ovirt live migration traffic Message-ID: <5A856EBD.2050503@trssistemi.com> Hello to everyone, I have founded the article "How to specify a logical network for RHEV live migration traffic" at https://access.redhat.com/solutions/70412 but i can't read it because i not have an "active Red Hat subscription". It is there an article as "How to specify a logical network for ovirt live migration traffic" or similar? It is possible to do that (for example to specify a nic ethX for live migration traffic between two host)? Many thanks for your replies, Simone From Chris.Yeun at smiths-detection.com Thu Feb 15 23:13:56 2018 From: Chris.Yeun at smiths-detection.com (Yeun, Chris (DNWK)) Date: Thu, 15 Feb 2018 23:13:56 +0000 Subject: [ovirt-users] Moving VMs to another cluster Message-ID: <1518736436742.88456@smiths-detection.com> ?Hello, How do you move a VM to another cluster within the same data center? I have a cluster running ovirt 3.5 nodes. I created another cluster with hosts running CentOS 7 (ovirt 3.6 version) and want to move VMs to this cluster. The compatibility mode for everything is 3.5. I tried shutting down a VM, but I cannot select the other cluster. Also live migration fails as well to the new cluster. Thanks, Chris ______________________________________________________________________ This email has been scanned by the Boundary Defense for Email Security System. 
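[Editor's note on the cluster-move question above: moving a stopped VM between clusters of the same data center is normally just an update of the VM's cluster field; when the admin UI greys out the target cluster it is often a compatibility-version or CPU-type mismatch. The equivalent REST call boils down to a PUT with a tiny XML body; a sketch of that request, where the API root, VM id and cluster name are all placeholders, and on 3.5/3.6 the VM must be down:]

```python
def move_vm_request(api_root, vm_id, cluster_name):
    """Build (method, url, xml_body) for re-assigning a stopped VM to
    another cluster through the oVirt REST API."""
    url = f"{api_root}/vms/{vm_id}"
    body = f"<vm><cluster><name>{cluster_name}</name></cluster></vm>"
    return "PUT", url, body

# All values below are placeholders.
method, url, body = move_vm_request(
    "https://engine.example/api",   # 3.x-era API root; 4.x uses /ovirt-engine/api
    "123e4567-e89b-12d3-a456-426614174000",
    "NewCluster",
)
print(method, url)
print(body)
```

[Sending that request with any HTTP client authenticated against the engine should either move the VM or return an error message that names the actual blocker, which is more informative than a greyed-out dropdown.]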
For more information please visit http://www.apptix.com/email-security/antispam-virus ______________________________________________________________________ -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Mon Feb 19 14:55:36 2018 From: msteele at telvue.com (Mark Steele) Date: Mon, 19 Feb 2018 09:55:36 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: At this point I'm wondering if there is anyone in the community that freelances and would be willing to provide remote support to resolve this issue? We are running with 1/2 our normal hosts, and not being able to add anymore back into the cluster is a serious problem. Best regards, *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Sat, Feb 17, 2018 at 12:53 PM, Mark Steele wrote: > Yaniv, > > I have one of my developers assisting me and we are continuing to run into > issues. This is a note from him: > > Hi, I'm trying to add a host to ovirt, but I'm running into package > dependency problems. I have existing hosts that are working and integrated > properly, and inspecting those, I am able to match the packages between the > new host and the existing, but when I then try to add the new host to > ovirt, it fails on reinstall because it's trying to install packages that > are later versions. does the installation run list from ovirt-release35 > 002-1 have unspecified versions? The working hosts use libvirt-1.1.1-29, > and vdsm-4.16.7, but it's trying to install vdsm-4.16.30, which requires a > higher version of libvirt, at which point, the installation fails. is there > some way I can specify which package versions the ovirt install procedure > uses? 
or better yet, skip the package management step entirely? > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. > facebook.com/telvue > > On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: > >> >> >> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele wrote: >> >>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>> Version: 3.5.0.1-1.el6 >>> >> >> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >> which is a result of a default change of libvirt and was fixed in later >> versions of oVirt than the one you are using. >> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you >> can probably configure it manually. >> Y. >> >> >>> >>> We have four other hosts that are running this same configuration >>> already. I took one host out of the cluster (forcefully) that was working >>> and now it will not add back in either - throwing the same SASL error. >>> >>> We are looking at downgrading libvirt as I've seen that somewhere else - >>> is there another version of RH I should be trying? I have a host I can put >>> it on. >>> >>> >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>> >>>> Hello all, >>>> >>>> We recently had a network event where we lost access to our storage for >>>> a period of time. 
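[Editor's note: one way to pin down the version skew described earlier in this thread (vdsm-4.16.7 vs vdsm-4.16.30 and the libvirt it drags in) is to diff `rpm -qa` output from a working host against the candidate host before letting the engine reinstall anything. A rough sketch; the package lists here are truncated, illustrative examples:]

```python
def parse_rpm_qa(text):
    """Map package name -> version-release from `rpm -qa` output lines
    such as 'libvirt-1.1.1-29.el7.x86_64' (name-version-release.arch)."""
    pkgs = {}
    for line in text.split():
        parts = line.rsplit("-", 2)
        if len(parts) == 3:
            pkgs[parts[0]] = parts[1] + "-" + parts[2]
    return pkgs

def version_diff(working, candidate):
    """Packages whose versions differ between the two hosts."""
    names = sorted(set(working) | set(candidate))
    return [(n, working.get(n), candidate.get(n))
            for n in names if working.get(n) != candidate.get(n)]

# Truncated, illustrative package lists from the two hosts.
working = parse_rpm_qa("libvirt-1.1.1-29.el7.x86_64 vdsm-4.16.7-1.el7.x86_64")
candidate = parse_rpm_qa("libvirt-1.2.17-13.el7.x86_64 vdsm-4.16.30-0.el7.x86_64")
for name, a, b in version_diff(working, candidate):
    print(f"{name}: working={a} candidate={b}")
```

[If the versions on the working hosts are the ones you need, `yum versionlock` (from yum-plugin-versionlock) can hold them there before re-adding the host - worth verifying that the plugin is available on your EL release.]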
The Cluster basically shut down all our VM's and in the >>>> process we had three HV's that went offline and would not communicate >>>> properly with the cluster. >>>> >>>> We have since completely reinstalled CentOS on the hosts and attempted >>>> to install them into the cluster with no joy. We've gotten to the point >>>> where we generally get an error message in the web gui: >>>> >>>> >>>> Which EL release and which oVirt release are you using? My guess would >>>> be latest EL, with an older oVirt? >>>> Y. >>>> >>>> >>>> Stage: Misc Configuration >>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>> during SSH session 'root at 10.1.90.154'. >>>> >>>> the following is what we are seeing in the messages log: >>>> >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>> Error -4 in server.c near line 1757) >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>> Input/output error >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>> Error -4 in server.c near line 1757) >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>> 15233: error : remoteDispatchAuthSaslInit:3411 : 
authentication >>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>> Input/output error >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>> authentication failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call last): >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>>> 219, in main >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>> tool_command[cmd]["command"](*args) >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>> line 83, in upgrade_networks >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>> networks >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>> 159, in get >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line 95, >>>> in _open_qemu_connection >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>>> timeout=10, sleep=0.2) >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>> libvirtError('virConnectOpenAuth() failed') >>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication 
>>>> failed: authentication failed >>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>> process exited, code=exited status=1 >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>> Server Manager network restoration. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>> Desktop Server Manager. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed with >>>> result 'dependency'. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>>> failed state. >>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>>> >>>> Can someone point me in the right direction to resolve this - it seems >>>> to be a SASL issue perhaps? >>>> >>>> *** >>>> *Mark Steele* >>>> CIO / VP Technical Operations | TelVue Corporation >>>> TelVue - We Share Your Vision >>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>> >>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>> www.telvue.com >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>> .com/telvue >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rightkicktech at gmail.com Mon Feb 19 15:01:53 2018 From: rightkicktech at gmail.com (Alex K) Date: Mon, 19 Feb 2018 17:01:53 +0200 Subject: [ovirt-users] How to specify a logical network for ovirt live migration traffic In-Reply-To: <5A856EBD.2050503@trssistemi.com> References: <5A856EBD.2050503@trssistemi.com> Message-ID: You define your networks, then under Cluster -> Logical Networks you select which one will be used for migration. On Feb 19, 2018 4:53 PM, "simone.sanna at trssistemi.com" < simone.sanna at trssistemi.com> wrote: > Hello to everyone, > > I have found the article "How to specify a logical network for RHEV live > migration traffic" at https://access.redhat.com/solutions/70412 but I > can't read it because I do not have an "active Red Hat subscription". > > Is there an article such as "How to specify a logical network for ovirt live > migration traffic" or similar? > Is it possible to do that (for example to specify a NIC ethX for live > migration traffic between two hosts)? > > Many thanks for your replies, > > Simone > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.tambovskiy at gmail.com Mon Feb 19 15:12:29 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Mon, 19 Feb 2018 18:12:29 +0300 Subject: [ovirt-users] Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: Thanks a lot, Simone! This clearly shows the problem: [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id from vds' vds_name | vds_spm_id -----------------+------------ ovirt1.local | 2 ovirt2.local | 1 (2 rows) While hosted-engine.conf on ovirt1.local has host_id=1, and ovirt2.local has host_id=2. So the values are exactly swapped. So how do I get this fixed in a simple way? Update the engine DB?
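The mismatch above is easy to check mechanically before touching anything. A minimal sketch against scratch copies of the conf files — the file names, contents and helper are illustrative only (on a real host the file lives at /etc/ovirt-hosted-engine/hosted-engine.conf, and the expected values come from the psql output shown in this thread):

```shell
# Compare host_id in a hosted-engine.conf-style file with the vds_spm_id
# value reported by the engine DB. Scratch files only; nothing touches a host.
check() {  # check <vds_name> <conf-file> <vds_spm_id-from-db>
    have=$(sed -n 's/^host_id=//p' "$2")
    if [ "$have" = "$3" ]; then
        echo "$1: OK (host_id=$have)"
    else
        echo "$1: MISMATCH - conf has host_id=$have, engine DB says $3"
    fi
}

conf1=$(mktemp); printf 'host_id=1\nconsole=vnc\n' > "$conf1"   # ovirt1.local
conf2=$(mktemp); printf 'host_id=2\nconsole=vnc\n' > "$conf2"   # ovirt2.local

check ovirt1.local "$conf1" 2   # DB says vds_spm_id=2
check ovirt2.local "$conf2" 1   # DB says vds_spm_id=1
rm -f "$conf1" "$conf2"
```

With the swapped values from this thread, both lines report a MISMATCH, confirming that the conf files (not the DB) need fixing.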
Regards, Artem On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi wrote: > > > On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy < > artem.tambovskiy at gmail.com> wrote: > >> Hello, >> >> Last weekend my cluster suffered form a massive power outage due to human >> mistake. >> I'm using SHE setup with Gluster, I managed to bring the cluster up >> quickly, but once again I have a problem with duplicated host_id ( >> https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and >> due to this second host is not capable to run HE. >> >> I manually updated file hosted_engine.conf with correct host_id and >> restarted agent & broker - no effect. Than I rebooted the host itself - >> still no changes. How to fix this issue? >> > > I'd suggest to run this command on the engine VM: > sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c 'select > vds_name, vds_spm_id from vds' > (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id > from vds' if still on 4.1) and check /etc/ovirt-hosted-engine/hosted-engine.conf > on all the involved host. > Maybe you can also have a leftover configuration file on undeployed host. > > When you find a conflict you should manually bring down sanlock > In doubt a reboot of both the hosts will solve for sure. > > > >> >> Regards, >> Artem >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stirabos at redhat.com Mon Feb 19 15:18:21 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 19 Feb 2018 16:18:21 +0100 Subject: [ovirt-users] Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy < artem.tambovskiy at gmail.com> wrote: > > Thanks a lot, Simone! 
> > This is clearly shows a problem: > > [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select > vds_name, vds_spm_id from vds' > vds_name | vds_spm_id > -----------------+------------ > ovirt1.local | 2 > ovirt2.local | 1 > (2 rows) > > While hosted-engine.conf on ovirt1.local have host_id=1, and ovirt2.local > host_id=2. So totally opposite values. > So how to get this fixed in the simple way? Update the engine DB? > I'd suggest to manually fix /etc/ovirt-hosted-engine/hosted-engine.conf on both the hosts > > Regards, > Artem > > On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi > wrote: > >> >> >> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy < >> artem.tambovskiy at gmail.com> wrote: >> >>> Hello, >>> >>> Last weekend my cluster suffered form a massive power outage due to >>> human mistake. >>> I'm using SHE setup with Gluster, I managed to bring the cluster up >>> quickly, but once again I have a problem with duplicated host_id ( >>> https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and >>> due to this second host is not capable to run HE. >>> >>> I manually updated file hosted_engine.conf with correct host_id and >>> restarted agent & broker - no effect. Than I rebooted the host itself - >>> still no changes. How to fix this issue? >>> >> >> I'd suggest to run this command on the engine VM: >> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c 'select >> vds_name, vds_spm_id from vds' >> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id >> from vds' if still on 4.1) and check /etc/ovirt-hosted-engine/hosted-engine.conf >> on all the involved host. >> Maybe you can also have a leftover configuration file on undeployed host. >> >> When you find a conflict you should manually bring down sanlock >> In doubt a reboot of both the hosts will solve for sure. 
>> >> >> >>> >>> Regards, >>> Artem >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.tambovskiy at gmail.com Mon Feb 19 15:40:02 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Mon, 19 Feb 2018 18:40:02 +0300 Subject: [ovirt-users] Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: OK, understood. Once I set the correct host_id on both hosts, how do I make the changes take effect? With minimal downtime? Or do I need to reboot both hosts anyway? Regards, Artem On 19 Feb 2018 at 18:18, "Simone Tiraboschi" wrote: > > > On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy < > artem.tambovskiy at gmail.com> wrote: > >> >> Thanks a lot, Simone! >> >> This is clearly shows a problem: >> >> [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select >> vds_name, vds_spm_id from vds' >> vds_name | vds_spm_id >> -----------------+------------ >> ovirt1.local | 2 >> ovirt2.local | 1 >> (2 rows) >> >> While hosted-engine.conf on ovirt1.local have host_id=1, and >> ovirt2.local host_id=2. So totally opposite values. >> So how to get this fixed in the simple way? Update the engine DB? >> > > I'd suggest to manually fix /etc/ovirt-hosted-engine/hosted-engine.conf > on both the hosts > > >> >> Regards, >> Artem >> >> On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi >> wrote: >> >>> >>> >>> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy < >>> artem.tambovskiy at gmail.com> wrote: >>> >>>> Hello, >>>> >>>> Last weekend my cluster suffered form a massive power outage due to >>>> human mistake.
>>>> I'm using SHE setup with Gluster, I managed to bring the cluster up >>>> quickly, but once again I have a problem with duplicated host_id ( >>>> https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host >>>> and due to this second host is not capable to run HE. >>>> >>>> I manually updated file hosted_engine.conf with correct host_id and >>>> restarted agent & broker - no effect. Than I rebooted the host itself - >>>> still no changes. How to fix this issue? >>>> >>> >>> I'd suggest to run this command on the engine VM: >>> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c >>> 'select vds_name, vds_spm_id from vds' >>> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id >>> from vds' if still on 4.1) and check /etc/ovirt-hosted-engine/hosted-engine.conf >>> on all the involved host. >>> Maybe you can also have a leftover configuration file on undeployed host. >>> >>> When you find a conflict you should manually bring down sanlock >>> In doubt a reboot of both the hosts will solve for sure. >>> >>> >>> >>>> >>>> Regards, >>>> Artem >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>>> >>> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From michal.skrivanek at redhat.com Mon Feb 19 15:40:31 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Mon, 19 Feb 2018 16:40:31 +0100 Subject: [ovirt-users] Moving VMs to another cluster In-Reply-To: <1518736436742.88456@smiths-detection.com> References: <1518736436742.88456@smiths-detection.com> Message-ID: <77361061-2974-43CB-A35E-3D55489E4CFF@redhat.com> > On 16 Feb 2018, at 00:13, Yeun, Chris (DNWK) wrote: > > Hello, > > How do you move a VM to another cluster within the same data center? I have a cluster running ovirt 3.5 nodes. I created another cluster with hosts running CentOS 7 (ovirt 3.6 version) and want to move VMs to this cluster. The compatibility mode for everything is 3.5. > > I tried shutting down a VM, but I cannot select the other cluster. that should work, just edit the VM and move to a different cluster. Does it give any reason why you cannot do that? > Also live migration fails as well to the new cluster. yeah, that should not work. How exactly does it fail to migrate? I guess you're using the migration dialog and migrate to the new cluster (that's removed/hidden in 4.1) It's going to fail with a specific error/reason (might be missing network or mismatch in cluster settings). Such error would only be in vdsm.log somewhere... Thanks, michal > > Thanks, > Chris > > ______________________________________________________________________ > This email has been scanned by the Boundary Defense for Email Security System. For more information please visit http://www.apptix.com/email-security/antispam-virus > ______________________________________________________________________ > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed...
URL: From msivak at redhat.com Mon Feb 19 15:45:07 2018 From: msivak at redhat.com (Martin Sivak) Date: Mon, 19 Feb 2018 16:45:07 +0100 Subject: [ovirt-users] Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: Hi Artem, just a restart of ovirt-ha-agent services should be enough. Best regards Martin Sivak On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy wrote: > Ok, understood. > Once I set correct host_id on both hosts how to take changes in force? With > minimal downtime? Or i need reboot both hosts anyway? > > Regards, > Artem > > 19 ????. 2018 ?. 18:18 ???????????? "Simone Tiraboschi" > ???????: > >> >> >> On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy >> wrote: >>> >>> >>> Thanks a lot, Simone! >>> >>> This is clearly shows a problem: >>> >>> [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select >>> vds_name, vds_spm_id from vds' >>> vds_name | vds_spm_id >>> -----------------+------------ >>> ovirt1.local | 2 >>> ovirt2.local | 1 >>> (2 rows) >>> >>> While hosted-engine.conf on ovirt1.local have host_id=1, and ovirt2.local >>> host_id=2. So totally opposite values. >>> So how to get this fixed in the simple way? Update the engine DB? >> >> >> I'd suggest to manually fix /etc/ovirt-hosted-engine/hosted-engine.conf on >> both the hosts >> >>> >>> >>> Regards, >>> Artem >>> >>> On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi >>> wrote: >>>> >>>> >>>> >>>> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy >>>> wrote: >>>>> >>>>> Hello, >>>>> >>>>> Last weekend my cluster suffered form a massive power outage due to >>>>> human mistake. >>>>> I'm using SHE setup with Gluster, I managed to bring the cluster up >>>>> quickly, but once again I have a problem with duplicated host_id >>>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second host and due >>>>> to this second host is not capable to run HE. 
>>>>> >>>>> I manually updated file hosted_engine.conf with correct host_id and >>>>> restarted agent & broker - no effect. Than I rebooted the host itself - >>>>> still no changes. How to fix this issue? >>>> >>>> I'd suggest to run this command on the engine VM: >>>> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c >>>> 'select vds_name, vds_spm_id from vds' >>>> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id >>>> from vds' if still on 4.1) and check >>>> /etc/ovirt-hosted-engine/hosted-engine.conf on all the involved host. >>>> Maybe you can also have a leftover configuration file on undeployed >>>> host. >>>> >>>> When you find a conflict you should manually bring down sanlock >>>> In doubt a reboot of both the hosts will solve for sure. >>>> >>>> >>>>> >>>>> >>>>> Regards, >>>>> Artem >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>> >>> >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From artem.tambovskiy at gmail.com Mon Feb 19 18:32:31 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Mon, 19 Feb 2018 21:32:31 +0300 Subject: [ovirt-users] Fwd: Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: Thanks Martin. As you suggested, I updated hosted-engine.conf with the correct host_id values and restarted the ovirt-ha-agent services on both hosts, and now I run into a problem with status "unknown-stale-data" :( And the second host still doesn't look capable of running HE. Should I stop the HE VM, bring down the ovirt-ha-agents, reinitialize-lockspace and start the ovirt-ha-agents again?
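For reference, the conf-file edit itself can be scripted. The sketch below rewrites host_id in a throwaway copy so nothing real is touched; the commented-out host-side commands are the ones discussed in this thread, included as an assumption rather than a verified recipe for every version:

```shell
# Sketch only: set host_id in a scratch copy of hosted-engine.conf.
# On a real host you would edit /etc/ovirt-hosted-engine/hosted-engine.conf
# instead, then restart the HA services (commented out below).
conf=$(mktemp)
printf 'host_id=2\nconsole=vnc\n' > "$conf"    # pretend this is ovirt1's file
sed -i 's/^host_id=.*/host_id=1/' "$conf"      # value from vds_spm_id in the DB
grep '^host_id=' "$conf"                       # prints: host_id=1
# systemctl restart ovirt-ha-broker ovirt-ha-agent
# hosted-engine --reinitialize-lockspace   # only if state stays unknown-stale-data
rm -f "$conf"
```

The restart/reinitialize steps mirror Martin's and Simone's advice above; run them on each affected host, not on the engine VM.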
Regards, Artem On Mon, Feb 19, 2018 at 6:45 PM, Martin Sivak wrote: > Hi Artem, > > just a restart of ovirt-ha-agent services should be enough. > > Best regards > > Martin Sivak > > On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy > wrote: > > Ok, understood. > > Once I set correct host_id on both hosts how to take changes in force? > With > > minimal downtime? Or i need reboot both hosts anyway? > > > > Regards, > > Artem > > > > 19 ????. 2018 ?. 18:18 ???????????? "Simone Tiraboschi" > > ???????: > > > >> > >> > >> On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy > >> wrote: > >>> > >>> > >>> Thanks a lot, Simone! > >>> > >>> This is clearly shows a problem: > >>> > >>> [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c 'select > >>> vds_name, vds_spm_id from vds' > >>> vds_name | vds_spm_id > >>> -----------------+------------ > >>> ovirt1.local | 2 > >>> ovirt2.local | 1 > >>> (2 rows) > >>> > >>> While hosted-engine.conf on ovirt1.local have host_id=1, and > ovirt2.local > >>> host_id=2. So totally opposite values. > >>> So how to get this fixed in the simple way? Update the engine DB? > >> > >> > >> I'd suggest to manually fix /etc/ovirt-hosted-engine/hosted-engine.conf > on > >> both the hosts > >> > >>> > >>> > >>> Regards, > >>> Artem > >>> > >>> On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi < > stirabos at redhat.com> > >>> wrote: > >>>> > >>>> > >>>> > >>>> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy > >>>> wrote: > >>>>> > >>>>> Hello, > >>>>> > >>>>> Last weekend my cluster suffered form a massive power outage due to > >>>>> human mistake. > >>>>> I'm using SHE setup with Gluster, I managed to bring the cluster up > >>>>> quickly, but once again I have a problem with duplicated host_id > >>>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second > host and due > >>>>> to this second host is not capable to run HE. 
> >>>>> > >>>>> I manually updated file hosted_engine.conf with correct host_id and > >>>>> restarted agent & broker - no effect. Than I rebooted the host > itself - > >>>>> still no changes. How to fix this issue? > >>>> > >>>> > >>>> I'd suggest to run this command on the engine VM: > >>>> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c > >>>> 'select vds_name, vds_spm_id from vds' > >>>> (just sudo -u postgres psql -d engine -c 'select vds_name, vds_spm_id > >>>> from vds' if still on 4.1) and check > >>>> /etc/ovirt-hosted-engine/hosted-engine.conf on all the involved host. > >>>> Maybe you can also have a leftover configuration file on undeployed > >>>> host. > >>>> > >>>> When you find a conflict you should manually bring down sanlock > >>>> In doubt a reboot of both the hosts will solve for sure. > >>>> > >>>> > >>>>> > >>>>> > >>>>> Regards, > >>>>> Artem > >>>>> > >>>>> _______________________________________________ > >>>>> Users mailing list > >>>>> Users at ovirt.org > >>>>> http://lists.ovirt.org/mailman/listinfo/users > >>>>> > >>>> > >>> > >>> > >>> > >>> _______________________________________________ > >>> Users mailing list > >>> Users at ovirt.org > >>> http://lists.ovirt.org/mailman/listinfo/users > >>> > >> > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From O.Dietzel at rto.de Mon Feb 19 21:18:45 2018 From: O.Dietzel at rto.de (Oliver Dietzel) Date: Mon, 19 Feb 2018 21:18:45 +0000 Subject: [ovirt-users] Setup ovirt-guest-agent from tarball possible? Message-ID: Hi, i try to install ovirt-guest-agent on a Clearlinux vm (already up and running in our ovirt test cluster). The usual fedora / el7 rpm's do not work. Is it possible to install ovirt-guest-agent from a tarball? Or do i have to rebuild a src rpm? 
And where do i find the latest tarball and src rpm of this package / these packages? Any help appreciated, thx in advance Oli ___________________________________________________________ Oliver Dietzel RTO GmbH Hanauer Landstraße 439 60314 Frankfurt From jas at cse.yorku.ca Mon Feb 19 22:36:09 2018 From: jas at cse.yorku.ca (Jason Keltz) Date: Mon, 19 Feb 2018 17:36:09 -0500 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: <2EA3B6C6-C3E8-45D8-8ED4-4DF0AE97D279@redhat.com> References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <2EA3B6C6-C3E8-45D8-8ED4-4DF0AE97D279@redhat.com> Message-ID: Hi Michal, On 2/15/2018 12:05 PM, Michal Skrivanek wrote: >> On 15 Feb 2018, at 16:37, Jason Keltz wrote: >> >> On 02/15/2018 08:48 AM, nicolas at devels.es wrote: >>> Hi, >>> >>> We upgraded one of our infrastructures to 4.2.0 recently and since then some of our machines have the "Console" button greyed-out in the Admin UI, like they were disabled. >>> >>> I changed their compatibility to 4.2 but with no luck, as they're still disabled. >>> >>> Is there a way to know why that is, and how to solve it? >>> >>> I'm attaching a screenshot. >> Hi Nicolas. >> I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2. >> See bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1528868 >> (which admittedly was a mess of a bunch of different issues that occurred) > yeah, that's not a good idea to mix more issues:) > Seems https://bugzilla.redhat.com/show_bug.cgi?id=1528868#c26 is the last one relevant to the grayed out console problem in this email thread. > > it's also possible to check the "VM Devices" subtab and list the graphical devices. If this is the same problem as from Nicolas then it would list cirrus and it would be great if you can confirm the conditions are similar (i.e. originally a 3.6 VM) I believe it was originally a 3.6 VM. Is there anywhere I can verify this info?
If not, it would be helpful if oVirt kept track of the version that created the VM for cases just like this. VM Device subtab: (no Cirrus) > And then - if possible - describe some history of what happened. When was the VM created, when was the cluster updated, when the system was upgraded and to what versions. All I know is that everything was working fine, then I updated to 4.2, updated the cluster version, and then most of my consoles were not available. I can't remember if this happened before the cluster upgrade or not. I suspect it was most and not all VMs since some of them had been created later than 3.6, and this was an older one. I only have this one VM left in this state because I had deleted the other VMs and recreated them one at a time... I will wait to see if you want me to try Vineet's solution of making it headless, > Then before bringing it back up, unchecked headless in the VM > > We then had to do a Run-Once which failed > Then did a normal Run. > > Console was available, and all hardware came back fine. > ... but I won't try that yet in case you need additional information from the VM first. Jason. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: eadpakjggeififdb.png Type: image/png Size: 43283 bytes Desc: not available URL: From mhke_aj5566 at yahoo.com Tue Feb 20 00:59:05 2018 From: mhke_aj5566 at yahoo.com (michael pagdanganan) Date: Tue, 20 Feb 2018 00:59:05 +0000 (UTC) Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> Message-ID: <1021521808.1848498.1519088345208@mail.yahoo.com> My storage pool has two master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) hung on preparing for maintenance. When I tried to activate my old master domain (DATANd01 on Node1), all
storage domains go down and up and the master keeps rotating. Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos) Event Error: Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in ovirt engine Database but not on storage side Please consult with support VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or its version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000 VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: () Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv) Here's the logs from the engine: -------------------------------------------------------------------------------------------------------[root at dev2engine ~]# tail /var/log/messages Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root. Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root. Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root. Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root. Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root. Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root. Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root. Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. -------------------------------------------------------------------------------------------------------[root at dev2engine ~]# tail /var/log/ovirt-engine/engine.log 2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue 2018-02-20 08:01:27,825+08 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue 2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue 2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:51,846+08 WARN
[org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:51,866+08 INFO? [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue -------------- next part -------------- An HTML attachment was scrubbed... URL: From ishaby at redhat.com Tue Feb 20 05:54:26 2018 From: ishaby at redhat.com (Idan Shaby) Date: Tue, 20 Feb 2018 07:54:26 +0200 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: <20180219091707.5A406E2266@smtp01.mail.de> References: <20180219091707.5A406E2266@smtp01.mail.de> Message-ID: Hi, Can you please attach the engine, vdsm, daemon and proxy logs? Regards, Idan On Mon, Feb 19, 2018 at 11:17 AM, wrote: > > Hi, > I am trying to build a new vm based on a vhd image coming from a windows > machine. I converted the image to raw, and I am now trying to import it in > the engine. > After setting up the CA in my browser, the import process starts but stops > after a while with "paused by system" status. I can resume it, but it > pauses without transferring more. > The engine logs don't explain much, I see a line for the start and the > next one for the pause. > My network seems to work correctly, and I have plenty of space in the > storage domain. > What can cause the process to pause ? > Regards > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
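[Archive note: the four logs requested above have well-known default locations on an oVirt 4.x setup — /var/log/ovirt-engine/engine.log on the engine, /var/log/vdsm/vdsm.log, /var/log/ovirt-imageio-daemon/daemon.log and /var/log/ovirt-imageio-proxy/image-proxy.log on the hosts; these paths can differ per install. A minimal sketch of bundling them for a list post, using scratch copies so the snippet is runnable anywhere:]

```shell
# Bundle the requested logs into one gzip'd tarball before attaching.
# Scratch copies stand in for the real files here; on a real setup you
# would point tar at the /var/log paths listed above instead.
mkdir -p /tmp/ovirt-logs
for f in engine.log vdsm.log daemon.log image-proxy.log; do
    printf 'sample contents of %s\n' "$f" > "/tmp/ovirt-logs/$f"
done
# -C changes directory first so the archive contains a clean top-level dir
tar czf /tmp/ovirt-logs.tar.gz -C /tmp ovirt-logs
tar tzf /tmp/ovirt-logs.tar.gz   # list the archive members to verify
```

Compressing this way also works around mailing-list attachment size limits, which comes up later in this thread.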
URL: 

From eshenitz at redhat.com Tue Feb 20 06:09:41 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Feb 2018 08:09:41 +0200
Subject: [ovirt-users] 2 Master on Storage Pool [Event Error]
In-Reply-To: <1021521808.1848498.1519088345208@mail.yahoo.com>
References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com>
 <1021521808.1848498.1519088345208@mail.yahoo.com>
Message-ID: 

Hi,

Can you please attach full Engine and VDSM logs?

Thanks,

On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote:

> My storage pool has 2 Master Domain(Stored2 on Node2,Node1Container on
> Node1). and my old master domain (DATANd01 on Node1 ) hung on preparing
> for maintenance. When I tried to activate my old master domain (DATANd01
> on Node1 ) all storage domain goes down and up master keep on rotating.
>
>
> Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos)
>
>
> Event Error:
> Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain
> Stored2 is marked as master in ovirt engine Database but not on storage
> side Please consult with support
>
> VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or
> it's version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df,
> pool=5a865884-0366-0330-02b8-0000
>
> VDSM Node2 command HSMGetAllTastsStatusesVDS failed: Not SPM: ()
>
> Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)
>
> Here's logs from engine:
>
> ------------------------------------------------------------
> -------------------------------------------
> [root at dev2engine ~]# tail /var/log/messages
> Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
> Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
> Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
> Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
> Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
> Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.
>
>
> ------------------------------------------------------------
> -------------------------------------------
> [root at dev2engine ~]# tail /var/log/ovirt-engine/engine.log
> 2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor]
> (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for
> pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync
> between DB and VDSM. Domain Stored2 marked as master in DB and not in the
> storage
> 2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll.storage.pool.
> ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23)
> [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user
> SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__
> DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> PreparingForMaintenance
> 2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor]
> (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for
> pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync
> between DB and VDSM. Domain Stored2 marked as master in DB and not in the
> storage
> 2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll.storage.pool.
> ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17)
> [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user
> SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__
> DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> PreparingForMaintenance
> 2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor]
> (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for
> pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy]
> (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync
> between DB and VDSM. Domain Stored2 marked as master in DB and not in the
> storage
> 2018-02-20 08:01:51,846+08 WARN [org.ovirt.engine.core.bll.storage.pool.
> ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26)
> [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user
> SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__
> DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> PreparingForMaintenance
> 2018-02-20 08:01:51,866+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor]
> (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for
> pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
-- 
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
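[Archive note: the "Master domain is not in sync between DB and VDSM" warnings above mean the engine database and the metadata stored on the domain itself disagree about which domain holds the Master role. For a file-based storage domain, the on-storage side of that comparison is a plain-text dom_md/metadata file containing ROLE and MASTER_VERSION keys; the real path (/rhev/data-center/mnt/<server>/<domain-uuid>/dom_md/metadata) is host-specific, so the sketch below greps a sample copy of such a file:]

```shell
# Inspect which role a storage domain claims on the storage side.
# A sample metadata file stands in for the real one; on a host you would
# grep the file under /rhev/data-center/mnt/.../dom_md/metadata instead.
cat > /tmp/sample-dom-metadata <<'EOF'
CLASS=Data
DESCRIPTION=Stored2
ROLE=Master
MASTER_VERSION=65
POOL_UUID=5a865884-0366-0330-02b8-0000000002d4
EOF
grep -E '^(ROLE|MASTER_VERSION)=' /tmp/sample-dom-metadata
# prints: ROLE=Master and MASTER_VERSION=65
```

Comparing that output across domains shows which one actually holds the Master role on storage, independent of what the engine database believes.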
URL: 

From mhke_aj5566 at yahoo.com Tue Feb 20 06:43:07 2018
From: mhke_aj5566 at yahoo.com (michael pagdanganan)
Date: Tue, 20 Feb 2018 06:43:07 +0000 (UTC)
Subject: [ovirt-users] 2 Master on Storage Pool [Event Error]
In-Reply-To: <1101288079.1992124.1519108431417@mail.yahoo.com>
References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com>
 <1021521808.1848498.1519088345208@mail.yahoo.com>
 <1101288079.1992124.1519108431417@mail.yahoo.com>
Message-ID: <1802931026.2010356.1519108987398@mail.yahoo.com>

Sorry, I can't attach the log file, it's too big.

VDSM.log for node 1:

2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c-9e97-fc82d19a36b1 (api:52)
2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:46)
2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:52)
2018-02-20 14:38:47,226+0800 INFO
(jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:55,252+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:58,566+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:46) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:52) 2018-02-20 14:39:02,295+0800 INFO? (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:46) 2018-02-20 14:39:02,295+0800 INFO? 
(jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:52) 2018-02-20 14:39:02,300+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:39:10,270+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,631+0800 INFO? (jsonrpc/0) [vdsm.api] START connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/media/root/Slave1Data/dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:46) 2018-02-20 14:39:13,633+0800 INFO? 
(jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:52) 2018-02-20 14:39:13,634+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool.connectStorageServer succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,806+0800 INFO? (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u'5a865884-0366-0330-02b8-0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195-a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'active', u'225e1975-8121-4370-b317-86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9-1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776-b5c9-22ba5d0cb853 (api:46) 2018-02-20 14:39:13,807+0800 INFO? (jsonrpc/7) [storage.StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'Active', u'225e1975-8121-4370-b317-86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9-1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'Active'} (spbackends:450) VDSM.log 22018-02-20 14:41:14,598+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8-9836-3baf85cd63e4 (api:52) 2018-02-20 14:41:18,074+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:24,333+0800 INFO? (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:46) 2018-02-20 14:41:24,334+0800 INFO? 
(jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:52) 2018-02-20 14:41:24,338+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:27,087+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:29,607+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:46) 2018-02-20 14:41:29,608+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:52) 2018-02-20 14:41:33,079+0800 INFO? (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:40,456+0800 INFO? (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:46) 2018-02-20 14:41:40,457+0800 INFO? 
(jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:52) 2018-02-20 14:41:40,461+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:41:42,106+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:44,622+0800 INFO? (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:46) 2018-02-20 14:41:44,622+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:52) 2018-02-20 14:41:49,083+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) Engine log skId '404ccecc-aa7f-45ea-89e4-726956269bc9' task status 'finished' 2018-02-20 14:42:21,966+08 INFO? [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] spmStart polling ended, spm status: SPM 2018-02-20 14:42:21,967+08 INFO? [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] START, HSMClearTaskVDSCommand(HostName = Node1, HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea-89e4-726956269bc9'}), log id: 71688f70 2018-02-20 14:42:22,922+08 INFO? [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 2018-02-20 14:42:22,923+08 INFO? 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common.businessentities.SpmStatusResult at 78332453, log id: 3ea35d5 2018-02-20 14:42:22,935+08 INFO? [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph 2018-02-20 14:42:22,951+08 INFO? [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8-0000000002d4' 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand(HostName = Node1, VdsIdVDSCommandParametersBase:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4'}), log id: 1bdbea9d 2018-02-20 14:42:22,955+08 INFO? [org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true', storagePoolId='5a865884-0366-0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d 2018-02-20 14:42:23,956+08 INFO? 
[org.ovirt.engine.core.vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] (org.ovirt.thread.pool-7-thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv' 2018-02-20 14:42:24,936+08 INFO? [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10.43.2 2018-02-20 14:42:27,012+08 WARN? [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage 2018-02-20 14:42:27,026+08 WARN? [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 2018-02-20 14:42:27,103+08 INFO? [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :? ID: f3e372e3-1251-4195-a4b9-1027e40059df Type: Storage 2018-02-20 14:42:27,137+08 INFO? [org.ovirt.engine.core.vdsbroker.irsbroker.ResetIrsVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{runAsync='true', storagePoolId='5a865884-0366-0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d 2018-02-20 14:42:27,140+08 INFO? 
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', storagePoolId='5a865884-0366-0330-02b8-0000000002d4'}), log id: 7c67bf06
2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8-0000000002d4'

On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan wrote:

Thanks for quick response, see attachment.

On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote:

Hi,
Can you please attach full Engine and VDSM logs?
Thanks,

On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote:

My storage pool has 2 Master Domain(Stored2 on Node2,Node1Container on Node1). and my old master domain (DATANd01 on Node1 ) hung on preparing for maintenance. When I tried to activate my old master domain (DATANd01 on Node1 ) all storage domain goes down and up master keep on rotating.

Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos)

Event Error:
Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in ovirt engine Database but not on storage side Please consult with support

VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or it's version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000

VDSM Node2 command HSMGetAllTastsStatusesVDS failed: Not SPM: ()

Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)

Here's logs from engine:

-------------------------------------------------------------------------------------------------------
[root at dev2engine ~]# tail /var/log/messages
Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.

-------------------------------------------------------------------------------------------------------
[root at dev2engine ~]# tail /var/log/ovirt-engine/engine.log
2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:51,846+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:51,866+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-- 
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From eshenitz at redhat.com Tue Feb 20 06:51:49 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Tue, 20 Feb 2018 08:51:49 +0200
Subject: [ovirt-users] 2 Master on Storage Pool [Event Error]
In-Reply-To: <1802931026.2010356.1519108987398@mail.yahoo.com>
References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com>
 <1021521808.1848498.1519088345208@mail.yahoo.com>
 <1101288079.1992124.1519108431417@mail.yahoo.com>
 <1802931026.2010356.1519108987398@mail.yahoo.com>
Message-ID: 

Please try to compress the logs, maybe it will help.

On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote:

> Sorry can't attached log file it's too big file
>
>
> VDSM.log for node 1
>
> 2766', 'lastCheck': '4.9', 'valid': True}} from=internal,
> task_id=645d456e-f59f-4b1c-9e97-fc82d19a36b1 (api:52)
> 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START
> repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e,
> task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:46)
> 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats
> return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual':
> True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck':
> '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code':
> 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263',
> 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358':
> {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay':
> '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662':
> {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay':
> '0.000272766', 'lastCheck': '6.0', 'valid': True}}
> from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f
>
(api:52) > 2018-02-20 14:38:47,226+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:55,252+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:58,566+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] START > repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 > (api:46) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', > 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, > task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:52) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, > task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:46) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': > '1.1', 
'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', > 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000243658', 'lastCheck': '1.1', 'valid': True}} > from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 > (api:52) > 2018-02-20 14:39:02,300+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:39:10,270+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:13,631+0800 INFO (jsonrpc/0) [vdsm.api] START > connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', > conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123', u'connection': > u'dev2node1.lares.com.ph:/run/media/root/Slave1Data/dataNode1', u'iqn': > u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', > u'password': '********', u'port': u''}], options=None) > from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 > (api:46) > 2018-02-20 14:39:13,633+0800 INFO (jsonrpc/0) [vdsm.api] FINISH > connectStorageServer return={'statuslist': [{'status': 0, 'id': > u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, > flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:52) > 2018-02-20 14:39:13,634+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call StoragePool.connectStorageServer succeeded in 0.00 
seconds > (__init__:539) > 2018-02-20 14:39:13,806+0800 INFO (jsonrpc/7) [vdsm.api] START > connectStoragePool(spUUID=u'5a865884-0366-0330-02b8-0000000002d4', > hostID=1, msdUUID=u'f3e372e3-1251-4195-a4b9-1027e40059df', > masterVersion=65, domainsMap={u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > u'active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'active', > u'225e1975-8121-4370-b317-86e964ae326f': u'attached', > u'f3e372e3-1251-4195-a4b9-1027e40059df': u'active', > u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'active', > u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'active'}, options=None) > from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776-b5c9-22ba5d0cb853 > (api:46) > 2018-02-20 14:39:13,807+0800 INFO (jsonrpc/7) [storage.StoragePoolMemoryBackend] > new storage pool master version 65 and domains map > {u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'Active', > u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'Active', > u'225e1975-8121-4370-b317-86e964ae326f': u'Attached', > u'f3e372e3-1251-4195-a4b9-1027e40059df': u'Active', > u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'Active', > u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'Active'} (spbackends:450) > > VDSM.log 2 > 2018-02-20 14:41:14,598+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': 
True, 'version': 4, 'acquired': True, 'delay': > '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=6213712b-9903-4db8-9836-3baf85cd63e4 (api:52) > 2018-02-20 14:41:18,074+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:24,333+0800 INFO (jsonrpc/4) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, > task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:46) > 2018-02-20 14:41:24,334+0800 INFO (jsonrpc/4) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': > '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', > 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} > from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a > (api:52) > 2018-02-20 14:41:24,338+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:27,087+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:29,607+0800 INFO (periodic/1) [vdsm.api] START > repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 > (api:46) > 2018-02-20 14:41:29,608+0800 INFO 
(periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', > 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, > task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:52) > 2018-02-20 14:41:33,079+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:40,456+0800 INFO (jsonrpc/2) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, > task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:46) > 2018-02-20 14:41:40,457+0800 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': > '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', > 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000209871', 'lastCheck': 
'1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} > from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 > (api:52) > 2018-02-20 14:41:40,461+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:41:42,106+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] START > repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 > (api:46) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:52) > 2018-02-20 14:41:49,083+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > > > Engine log > > skId '404ccecc-aa7f-45ea-89e4-726956269bc9' task status 'finished' > 2018-02-20 14:42:21,966+08 INFO 
[org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] spmStart polling ended, spm status: SPM > 2018-02-20 14:42:21,967+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] START, HSMClearTaskVDSCommand(HostName = Node1, > HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', > hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea-89e4-726956269bc9'}), > log id: 71688f70 > 2018-02-20 14:42:22,922+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 > 2018-02-20 14:42:22,923+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. > businessentities.SpmStatusResult at 78332453, log id: 3ea35d5 > 2018-02-20 14:42:22,935+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] > (org.ovirt.thread.pool-7-thread-38) [29528f9] Initialize Irs proxy from > vds: dev2node1.lares.com.ph > 2018-02-20 14:42:22,951+08 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) > [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call > Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool > Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] -- executeIrsBrokerCommand: > Attempting on storage pool '5a865884-0366-0330-02b8-0000000002d4' > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] START, > HSMGetAllTasksInfoVDSCommand(HostName = Node1, > VdsIdVDSCommandParametersBase:{runAsync='true', > hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4'}), log id: 1bdbea9d > 2018-02-20 14:42:22,955+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-7) [29528f9] START, > SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4', > ignoreFailoverLimit='false'}), log id: 5c2422d6 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, > HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, > SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (org.ovirt.thread.pool-7-thread-38) [29528f9] Discovered no tasks on > Storage Pool 'UnsecuredEnv' > 2018-02-20 14:42:24,936+08 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] > (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10.43.2 > 2018-02-20 14:42:27,012+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] > (org.ovirt.thread.pool-7-thread-43) [] Master domain is not in sync > between DB and VDSM. Domain Node1Container marked as master in DB and not > in the storage > 2018-02-20 14:42:27,026+08 WARN [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-43) > [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC(990), Correlation ID: null, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error > on Master Domain between Host Node1 and oVirt Engine. Domain: > Node1Container is marked as Master in oVirt Engine database but not on the > Storage side. Please consult with Support on how to fix this issue. > 2018-02-20 14:42:27,103+08 INFO [org.ovirt.engine.core.bll.storage.pool. > ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. > Entities affected : ID: f3e372e3-1251-4195-a4b9-1027e40059df Type: > Storage > 2018-02-20 14:42:27,137+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.ResetIrsVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{runAsync='true', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4', > ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', > ignoreStopFailed='true'}), log id: 3e0a239d > 2018-02-20 14:42:27,140+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, > SpmStopVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4'}), log id: 7c67bf06 > 2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id > '5a865884-0366-0330-02b8-0000000002d4' > > > On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan < > mhke_aj5566 at yahoo.com> wrote: > > > Thanks for quick response, > > see attachment. 
> On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote:
>
> Hi,
>
> Can you please attach full Engine and VDSM logs?
>
> Thanks,
>
> On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan <mhke_aj5566 at yahoo.com> wrote:
>
> My storage pool has 2 Master Domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) hung on "Preparing for Maintenance". When I tried to activate the old master domain (DATANd01 on Node1), all storage domains went down and came back up, and the master kept rotating.
>
> oVirt version: oVirt Engine 4.1.9.1.el7.centos
>
> Event Error:
> Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in the oVirt Engine database but not on the storage side. Please consult with support.
>
> VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or its version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000
>
> VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()
>
> Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)
>
> Here are the logs from the engine:
>
> ---------------------------------------------------------------------------------
> [root at dev2engine ~]# tail /var/log/messages
> Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
> Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
> Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
> Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
> Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
> Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.
>
> ---------------------------------------------------------------------------------
> [root at dev2engine ~]# tail /var/log/ovirt-engine/engine.log
> 2018-02-20 08:01:16,062+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:27,825+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
> 2018-02-20 08:01:27,862+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
> 2018-02-20 08:01:27,882+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:40,106+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
> 2018-02-20 08:01:40,197+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
> 2018-02-20 08:01:40,246+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
> 2018-02-20 08:01:51,809+08 WARN  [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
> 2018-02-20 08:01:51,846+08 WARN  [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
> 2018-02-20 08:01:51,866+08 INFO  [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Regards,
> Eyal Shenitzky

--
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From eshenitz at redhat.com Tue Feb 20 06:56:02 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Tue, 20 Feb 2018 08:56:02 +0200 Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] In-Reply-To: <1802931026.2010356.1519108987398@mail.yahoo.com> References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> Message-ID: Also, can you please describe the current setup of the environment? I'm not sure I understand, do you have 2 data-centers? Please attach some screenshots of the current situation. On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote: > Sorry can't attached log file it's too big file > > > VDSM.log for node 1 > > 2766', 'lastCheck': '4.9', 'valid': True}} from=internal, > task_id=645d456e-f59f-4b1c-9e97-fc82d19a36b1 (api:52) > 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, > task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:46) > 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': > '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', > 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000272766', 
'lastCheck': '6.0', 'valid': True}} > from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f > (api:52) > 2018-02-20 14:38:47,226+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:55,252+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:58,566+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] START > repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 > (api:46) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', > 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, > task_id=04f55ded-5841-44ae-a376-4f6e723e4b10 (api:52) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, > task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 (api:46) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats > 
return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': > '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', > 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000243658', 'lastCheck': '1.1', 'valid': True}} > from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4-ad09-024ec27f7443 > (api:52) > 2018-02-20 14:39:02,300+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:39:10,270+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:13,631+0800 INFO (jsonrpc/0) [vdsm.api] START > connectStorageServer(domType=1, spUUID=u'00000000-0000-0000-0000-000000000000', > conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123', u'connection': > u'dev2node1.lares.com.ph:/run/media/root/Slave1Data/dataNode1', u'iqn': > u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', > u'password': '********', u'port': u''}], options=None) > from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 > (api:46) > 2018-02-20 14:39:13,633+0800 INFO (jsonrpc/0) [vdsm.api] FINISH > connectStorageServer return={'statuslist': [{'status': 0, 'id': > u'5e4c94f1-f3b9-4fbd-a6c7-e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, > flow_id=15b57417, 
task_id=bfce1e70-ef4a-4e13-aaaa-7a66aaf44429 (api:52) > 2018-02-20 14:39:13,634+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call StoragePool.connectStorageServer succeeded in 0.00 seconds > (__init__:539) > 2018-02-20 14:39:13,806+0800 INFO (jsonrpc/7) [vdsm.api] START > connectStoragePool(spUUID=u'5a865884-0366-0330-02b8-0000000002d4', > hostID=1, msdUUID=u'f3e372e3-1251-4195-a4b9-1027e40059df', > masterVersion=65, domainsMap={u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > u'active', u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'active', > u'225e1975-8121-4370-b317-86e964ae326f': u'attached', > u'f3e372e3-1251-4195-a4b9-1027e40059df': u'active', > u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'active', > u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'active'}, options=None) > from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776-b5c9-22ba5d0cb853 > (api:46) > 2018-02-20 14:39:13,807+0800 INFO (jsonrpc/7) [storage.StoragePoolMemoryBackend] > new storage pool master version 65 and domains map > {u'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': u'Active', > u'4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': u'Active', > u'225e1975-8121-4370-b317-86e964ae326f': u'Attached', > u'f3e372e3-1251-4195-a4b9-1027e40059df': u'Active', > u'65ca2e2d-b472-4bee-85b4-09a161464b20': u'Active', > u'42e591b7-f86c-4b67-a3d2-40cc007f7662': u'Active'} (spbackends:450) > > VDSM.log 2 > 2018-02-20 14:41:14,598+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > 
{'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=6213712b-9903-4db8-9836-3baf85cd63e4 (api:52) > 2018-02-20 14:41:18,074+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:24,333+0800 INFO (jsonrpc/4) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, > task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a (api:46) > 2018-02-20 14:41:24,334+0800 INFO (jsonrpc/4) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': > '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', > 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} > from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f-af54-4cf2b06ee98a > (api:52) > 2018-02-20 14:41:24,338+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:27,087+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 
14:41:29,607+0800 INFO (periodic/1) [vdsm.api] START > repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 > (api:46) > 2018-02-20 14:41:29,608+0800 INFO (periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', > 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, > task_id=f14b7aab-b64a-4903-9368-d665e39b49d1 (api:52) > 2018-02-20 14:41:33,079+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:40,456+0800 INFO (jsonrpc/2) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, > task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 (api:46) > 2018-02-20 14:41:40,457+0800 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': > '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': {'code': > 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', > 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > 
'0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} > from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882-a742-fbd56ccb4037 > (api:52) > 2018-02-20 14:41:40,461+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:41:42,106+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] START > repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 > (api:46) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195-a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf-6e95c66b0454': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4-09a161464b20': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a-fb78f440a358': > {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': > '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2-40cc007f7662': > {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': > '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=20e0b29b-d3cd-4e44-b92c-213f9c984ab2 (api:52) > 2018-02-20 14:41:49,083+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call 
Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > > > Engine log > > skId '404ccecc-aa7f-45ea-89e4-726956269bc9' task status 'finished' > 2018-02-20 14:42:21,966+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] spmStart polling ended, spm status: SPM > 2018-02-20 14:42:21,967+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] START, HSMClearTaskVDSCommand(HostName = Node1, > HSMTaskGuidBaseVDSCommandParameters:{runAsync='true', > hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea-89e4-726956269bc9'}), > log id: 71688f70 > 2018-02-20 14:42:22,922+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 > 2018-02-20 14:42:22,923+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStartVDSCommand] (org.ovirt.thread.pool-7-thread-38) > [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. > businessentities.SpmStatusResult at 78332453, log id: 3ea35d5 > 2018-02-20 14:42:22,935+08 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] > (org.ovirt.thread.pool-7-thread-38) [29528f9] Initialize Irs proxy from > vds: dev2node1.lares.com.ph > 2018-02-20 14:42:22,951+08 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-38) > [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call > Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool > Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. 
> vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] -- executeIrsBrokerCommand: > Attempting on storage pool '5a865884-0366-0330-02b8-0000000002d4' > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] START, > HSMGetAllTasksInfoVDSCommand(HostName = Node1, > VdsIdVDSCommandParametersBase:{runAsync='true', > hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4'}), log id: 1bdbea9d > 2018-02-20 14:42:22,955+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-7) [29528f9] START, > SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{runAsync='true', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4', > ignoreFailoverLimit='false'}), log id: 5c2422d6 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, > HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7-thread-38) [29528f9] FINISH, > SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core.bll.tasks.AsyncTaskManager] > (org.ovirt.thread.pool-7-thread-38) [29528f9] Discovered no tasks on > Storage Pool 'UnsecuredEnv' > 2018-02-20 14:42:24,936+08 INFO [org.ovirt.vdsm.jsonrpc.client.reactors.ReactorClient] > (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10.43.2 > 2018-02-20 14:42:27,012+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] > (org.ovirt.thread.pool-7-thread-43) [] Master domain is not in sync > between DB and VDSM. Domain Node1Container marked as master in DB and not > in the storage > 2018-02-20 14:42:27,026+08 WARN [org.ovirt.engine.core.dal. 
> dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-7-thread-43) > [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC(990), Correlation ID: null, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error > on Master Domain between Host Node1 and oVirt Engine. Domain: > Node1Container is marked as Master in oVirt Engine database but not on the > Storage side. Please consult with Support on how to fix this issue. > 2018-02-20 14:42:27,103+08 INFO [org.ovirt.engine.core.bll.storage.pool. > ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. > Entities affected : ID: f3e372e3-1251-4195-a4b9-1027e40059df Type: > Storage > 2018-02-20 14:42:27,137+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.ResetIrsVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{runAsync='true', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4', > ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', > ignoreStopFailed='true'}), log id: 3e0a239d > 2018-02-20 14:42:27,140+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, > SpmStopVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', > storagePoolId='5a865884-0366-0330-02b8-0000000002d4'}), log id: 7c67bf06 > 2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) > [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id > '5a865884-0366-0330-02b8-0000000002d4' > > > On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan < > mhke_aj5566 at yahoo.com> wrote: > > > Thanks for quick response, > > see attachment. 
> > > On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky > wrote: > > > Hi, > > Can you please attach full Engine and VDSM logs? > > Thanks, > > On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan < > mhke_aj5566 at yahoo.com> wrote: > > My storage pool has 2 Master Domain(Stored2 on Node2,Node1Container on > Node1). and my old master domain (DATANd01 on Node1 ) hung on preparing > for maintenance. When I tried to activate my old master domain (DATANd01 > on Node1 ) all storage domain goes down and up master keep on rotating. > > > Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos) > > > Event Error: > Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain > Stored2 is marked as master in ovirt engine Database but not on storage > side Please consult with support > > VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or > it's version: u'SD=f3e372e3-1251-4195-a4b9- 1027e40059df, > pool=5a865884-0366-0330-02b8- 0000 > > VDSM Node2 command HSMGetAllTastsStatusesVDS failed: Not SPM: () > > Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv) > > Here's logs from engine: > > ------------------------------ ------------------------------ > ------------------------------ ------------- > [root at dev2engine ~]# tail /var/log/messages > Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root. > Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root. > Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root. > Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root. > Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root. > Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root. > Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root. > Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root. > Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root. 
> Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. > > > ------------------------------ ------------------------------ > ------------------------------ ------------- > [root at dev2engine ~]# tail /var/log/ovirt-engine/engine. log > 2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-32) > [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-23) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-23) > [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-20) > [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-17) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-17) > [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. 
Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-22) > [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-26) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:51,846+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-26) > [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:51,866+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-49) > [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > > > ______________________________ _________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/ mailman/listinfo/users > > > > > > -- > Regards, > Eyal Shenitzky > > > > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mhke_aj5566 at yahoo.com Tue Feb 20 06:59:55 2018 From: mhke_aj5566 at yahoo.com (michael pagdanganan) Date: Tue, 20 Feb 2018 06:59:55 +0000 (UTC) Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] In-Reply-To: References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> Message-ID: <351692054.2000353.1519109999931@mail.yahoo.com> Attached File compressed via winrar On Tuesday, February 20, 2018 2:52 PM, Eyal Shenitzky wrote: Please try to compress the logs maybe it will help. On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote: Sorry can't attached log file it's too big file VDSM.log for node 1 2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c- 9e97-fc82d19a36b1 (api:52) 2018-02-20 14:38:47,222+0800 INFO? (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:46) 2018-02-20 14:38:47,222+0800 INFO? 
(jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:52) 2018-02-20 14:38:47,226+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:55,252+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:58,566+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:46) 2018-02-20 14:39:01,093+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:52) 2018-02-20 14:39:02,295+0800 INFO? (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:46) 2018-02-20 14:39:02,295+0800 INFO? 
(jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:52) 2018-02-20 14:39:02,300+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:39:10,270+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,631+0800 INFO? (jsonrpc/0) [vdsm.api] START connectStorageServer(domType= 1, spUUID=u'00000000-0000-0000- 0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/ media/root/Slave1Data/ dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:46) 2018-02-20 14:39:13,633+0800 INFO? 
(jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:52) 2018-02-20 14:39:13,634+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool. connectStorageServer succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,806+0800 INFO? (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u' 5a865884-0366-0330-02b8- 0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195- a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6- 4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'active', u'225e1975-8121-4370-b317- 86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776- b5c9-22ba5d0cb853 (api:46) 2018-02-20 14:39:13,807+0800 INFO? (jsonrpc/7) [storage. StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'Active', u'225e1975-8121-4370-b317- 86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'Active'} (spbackends:450) VDSM.log 22018-02-20 14:41:14,598+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8- 9836-3baf85cd63e4 (api:52) 2018-02-20 14:41:18,074+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:24,333+0800 INFO? (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:46) 2018-02-20 14:41:24,334+0800 INFO? 
(jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:52) 2018-02-20 14:41:24,338+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:27,087+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:29,607+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:46) 2018-02-20 14:41:29,608+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:52) 2018-02-20 14:41:33,079+0800 INFO? (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:40,456+0800 INFO? (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:46) 2018-02-20 14:41:40,457+0800 INFO? 
(jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:52) 2018-02-20 14:41:40,461+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:41:42,106+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:44,622+0800 INFO? (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:46) 2018-02-20 14:41:44,622+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:52) 2018-02-20 14:41:49,083+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) Engine log skId '404ccecc-aa7f-45ea-89e4- 726956269bc9' task status 'finished' 2018-02-20 14:42:21,966+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] spmStart polling ended, spm status: SPM 2018-02-20 14:42:21,967+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMClearTaskVDSCommand( HostName = Node1, HSMTaskGuidBaseVDSCommandParam eters:{runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea- 89e4-726956269bc9'}), log id: 71688f70 2018-02-20 14:42:22,922+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 2018-02-20 14:42:22,923+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. 
SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. businessentities. SpmStatusResult at 78332453, log id: 3ea35d5 2018-02-20 14:42:22,935+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph 2018-02-20 14:42:22,951+08 INFO? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8- 0000000002d4' 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand( HostName = Node1, VdsIdVDSCommandParametersBase: {runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4'}), log id: 1bdbea9d 2018-02-20 14:42:22,955+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. 
SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.bll. tasks.AsyncTaskManager] (org.ovirt.thread.pool-7- thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv' 2018-02-20 14:42:24,936+08 INFO? [org.ovirt.vdsm.jsonrpc. client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10. 43.2 2018-02-20 14:42:27,012+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage 2018-02-20 14:42:27,026+08 WARN? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_ SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 2018-02-20 14:42:27,103+08 INFO? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :? ID: f3e372e3-1251-4195-a4b9- 1027e40059df Type: Storage 2018-02-20 14:42:27,137+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. ResetIrsVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d 2018-02-20 14:42:27,140+08 INFO? [org.ovirt.engine.core. 
vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{ runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4'}), log id: 7c67bf06 2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8- 0000000002d4' On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan wrote: Thanks for the quick response; see the attachment. On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote: Hi, Can you please attach the full Engine and VDSM logs? Thanks, On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote: My storage pool has 2 master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) is hung in "Preparing for Maintenance". When I try to activate the old master domain (DATANd01 on Node1), all storage domains go down and come back up, and the master keeps rotating. oVirt version: oVirt Engine 4.1.9.1.el7.centos Event Error: Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in the oVirt Engine database but not on the storage side. Please consult with support. VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or it's version: u'SD=f3e372e3-1251-4195-a4b9- 1027e40059df, pool=5a865884-0366-0330-02b8- 0000 VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: () Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv) Here are the logs from the engine: ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/messages Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root. Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root. 
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root. Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root. Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root. Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root. Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root. Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/ovirt-engine/engine. log 2018-02-20 08:01:16,062+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:27,825+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:27,862+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:27,882+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:40,106+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-17) [] Master domain is not in sync between DB and VDSM. 
Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:40,197+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:40,246+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:51,809+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:51,846+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:51,866+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue ______________________________ _________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/ mailman/listinfo/users -- Regards,Eyal Shenitzky -- Regards,Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ovirt log.rar Type: application/octet-stream Size: 1282494 bytes Desc: not available URL: From mhke_aj5566 at yahoo.com Tue Feb 20 07:03:42 2018 From: mhke_aj5566 at yahoo.com (michael pagdanganan) Date: Tue, 20 Feb 2018 07:03:42 +0000 (UTC) Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] In-Reply-To: References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> Message-ID: <708667852.2018838.1519110223090@mail.yahoo.com> Hi, Screenshot: only 1 data center, but 2 master domains, and I got an event warning. On Tuesday, February 20, 2018 2:56 PM, Eyal Shenitzky wrote: Also, can you please describe the current setup of the environment? I'm not sure I understand: do you have 2 data centers? Please attach some screenshots of the current situation. On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote: Sorry, I can't attach the log file; it's too big. VDSM.log for node 1: 2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c- 9e97-fc82d19a36b1 (api:52) 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:46) 2018-02-20 14:38:47,222+0800 INFO 
(jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:52) 2018-02-20 14:38:47,226+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:55,252+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:58,566+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:46) 2018-02-20 14:39:01,093+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:52) 2018-02-20 14:39:02,295+0800 INFO? (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:46) 2018-02-20 14:39:02,295+0800 INFO? 
(jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:52) 2018-02-20 14:39:02,300+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:39:10,270+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,631+0800 INFO? (jsonrpc/0) [vdsm.api] START connectStorageServer(domType= 1, spUUID=u'00000000-0000-0000- 0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/ media/root/Slave1Data/ dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:46) 2018-02-20 14:39:13,633+0800 INFO? 
(jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:52) 2018-02-20 14:39:13,634+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool. connectStorageServer succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,806+0800 INFO? (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u' 5a865884-0366-0330-02b8- 0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195- a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6- 4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'active', u'225e1975-8121-4370-b317- 86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776- b5c9-22ba5d0cb853 (api:46) 2018-02-20 14:39:13,807+0800 INFO? (jsonrpc/7) [storage. StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'Active', u'225e1975-8121-4370-b317- 86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'Active'} (spbackends:450) VDSM.log 22018-02-20 14:41:14,598+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8- 9836-3baf85cd63e4 (api:52) 2018-02-20 14:41:18,074+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:24,333+0800 INFO? (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:46) 2018-02-20 14:41:24,334+0800 INFO? 
(jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:52) 2018-02-20 14:41:24,338+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:27,087+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:29,607+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:46) 2018-02-20 14:41:29,608+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:52) 2018-02-20 14:41:33,079+0800 INFO? (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:40,456+0800 INFO? (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:46) 2018-02-20 14:41:40,457+0800 INFO? 
(jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:52) 2018-02-20 14:41:40,461+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:41:42,106+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:44,622+0800 INFO? (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:46) 2018-02-20 14:41:44,622+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:52) 2018-02-20 14:41:49,083+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) Engine log skId '404ccecc-aa7f-45ea-89e4- 726956269bc9' task status 'finished' 2018-02-20 14:42:21,966+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] spmStart polling ended, spm status: SPM 2018-02-20 14:42:21,967+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMClearTaskVDSCommand( HostName = Node1, HSMTaskGuidBaseVDSCommandParam eters:{runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea- 89e4-726956269bc9'}), log id: 71688f70 2018-02-20 14:42:22,922+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 2018-02-20 14:42:22,923+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. 
SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. businessentities. SpmStatusResult at 78332453, log id: 3ea35d5 2018-02-20 14:42:22,935+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph 2018-02-20 14:42:22,951+08 INFO? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8- 0000000002d4' 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand( HostName = Node1, VdsIdVDSCommandParametersBase: {runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4'}), log id: 1bdbea9d 2018-02-20 14:42:22,955+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. 
SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.bll. tasks.AsyncTaskManager] (org.ovirt.thread.pool-7- thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv' 2018-02-20 14:42:24,936+08 INFO? [org.ovirt.vdsm.jsonrpc. client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10. 43.2 2018-02-20 14:42:27,012+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage 2018-02-20 14:42:27,026+08 WARN? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_ SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 2018-02-20 14:42:27,103+08 INFO? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :? ID: f3e372e3-1251-4195-a4b9- 1027e40059df Type: Storage 2018-02-20 14:42:27,137+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. ResetIrsVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d 2018-02-20 14:42:27,140+08 INFO? [org.ovirt.engine.core. 
vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{ runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4'}), log id: 7c67bf06 2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8- 0000000002d4' On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan wrote: Thanks for the quick response; see the attachment. On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote: Hi, Can you please attach the full Engine and VDSM logs? Thanks, On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote: My storage pool has 2 master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) is hung in "Preparing for Maintenance". When I try to activate the old master domain (DATANd01 on Node1), all storage domains go down and come back up, and the master keeps rotating. oVirt version: oVirt Engine 4.1.9.1.el7.centos Event Error: Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in the oVirt Engine database but not on the storage side. Please consult with support. VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or it's version: u'SD=f3e372e3-1251-4195-a4b9- 1027e40059df, pool=5a865884-0366-0330-02b8- 0000 VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: () Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv) Here are the logs from the engine: ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/messages Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root. Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root. 
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root. Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root. Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root. Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root. Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root. Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/ovirt-engine/engine. log 2018-02-20 08:01:16,062+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:27,825+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:27,862+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:27,882+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:40,106+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-17) [] Master domain is not in sync between DB and VDSM. 
Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:40,197+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:40,246+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:51,809+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:51,846+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:51,866+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue ______________________________ _________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/ mailman/listinfo/users -- Regards,Eyal Shenitzky -- Regards,Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 75716 bytes Desc: not available URL: From k0ste at k0ste.ru Tue Feb 20 07:21:36 2018 From: k0ste at k0ste.ru (Konstantin Shalygin) Date: Tue, 20 Feb 2018 14:21:36 +0700 Subject: [ovirt-users] oVirt 4.2 with cheph In-Reply-To: References: Message-ID: <6c879caf-5469-ad89-930b-496fad62fbd4@k0ste.ru> > Hello, > > does someone have experience with CephFS as a VM storage domain? I am > thinking about it but have not found any hints... > > Thanks for any pointers... This is a bad idea. Use RBD: it is the interface meant for VMs, while CephFS is intended for different things. k From eshenitz at redhat.com Tue Feb 20 07:35:07 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Tue, 20 Feb 2018 09:35:07 +0200 Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] In-Reply-To: <708667852.2018838.1519110223090@mail.yahoo.com> References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> <708667852.2018838.1519110223090@mail.yahoo.com> Message-ID: What about the hosts' status? Can you please send a screenshot of the hosts and of the data center? On Tue, Feb 20, 2018 at 9:03 AM, michael pagdanganan wrote: > Hi, > > Screenshot > > only 1 data center, but 2 master domains, and I got an event warning > > [image: Inline image] > > > On Tuesday, February 20, 2018 2:56 PM, Eyal Shenitzky > wrote: > > > Also, can you please describe the current setup of the environment? > > I'm not sure I understand: do you have 2 data centers? > > Please attach some screenshots of the current situation. 
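[Editorial note on the errors in this thread: the "Wrong Master domain or it's version" failure and the SYSTEM_MASTER_DOMAIN_NOT_IN_SYNC event both come down to a comparison between the engine's recorded master domain (the msdUUID and masterVersion values visible in the connectStoragePool call in the VDSM log above) and what the storage-side metadata reports. The sketch below is purely illustrative, not oVirt/VDSM code; the helper name and the storage-side values are made up to show the shape of the check.]

```python
# Illustrative sketch only -- not oVirt/VDSM code. It mimics the consistency
# check behind "Wrong Master domain or it's version": the engine's view of the
# master storage domain must match the metadata found on storage.

def check_master_sync(engine_msd, engine_version, storage_msd, storage_version):
    """Return a list of mismatch descriptions; an empty list means in sync."""
    problems = []
    if engine_msd != storage_msd:
        problems.append("master domain mismatch: DB has %s, storage has %s"
                        % (engine_msd, storage_msd))
    if engine_version != storage_version:
        problems.append("master version mismatch: DB has %s, storage has %s"
                        % (engine_version, storage_version))
    return problems

# The engine values echo the connectStoragePool call in the VDSM log above
# (msdUUID=f3e372e3-..., masterVersion=65); the storage-side values here are
# hypothetical, chosen to show both mismatches being reported.
for problem in check_master_sync(
        "f3e372e3-1251-4195-a4b9-1027e40059df", 65,
        "42e591b7-f86c-4b67-a3d2-40cc007f7662", 64):
    print(problem)
```

When both checks pass, connectStoragePool succeeds; when either fails, the engine falls back to ReconstructMasterDomain, which is the loop visible in the engine log excerpts in this thread.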
> > On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan < > mhke_aj5566 at yahoo.com> wrote: > > Sorry can't attached log file it's too big file > > > VDSM.log for node 1 > > 2766', 'lastCheck': '4.9', 'valid': True}} from=internal, > task_id=645d456e-f59f-4b1c- 9e97-fc82d19a36b1 (api:52) > 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, > task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:46) > 2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': > '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', > 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- > 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, > '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, > 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', > 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', > 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, > flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:52) > 2018-02-20 14:38:47,226+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:55,252+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:38:58,566+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] START > 
repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae- > a376-4f6e723e4b10 (api:46) > 2018-02-20 14:39:01,093+0800 INFO (periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', > 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- > 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, > 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, > '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', > 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', > 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- > 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, > task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:52) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, > task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:46) > 2018-02-20 14:39:02,295+0800 INFO (jsonrpc/6) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': > '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', > 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- > 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, > '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, > 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': 
'1.0', > 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', > 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, > flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:52) > 2018-02-20 14:39:02,300+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:39:10,270+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:39:13,631+0800 INFO (jsonrpc/0) [vdsm.api] START > connectStorageServer(domType= 1, spUUID=u'00000000-0000-0000- > 0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7- > e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/ > media/root/Slave1Data/ dataNode1', u'iqn': u'', u'user': u'', u'tpgt': > u'1', u'protocol_version': u'auto', u'password': '********', u'port': > u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, > task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:46) > 2018-02-20 14:39:13,633+0800 INFO (jsonrpc/0) [vdsm.api] FINISH > connectStorageServer return={'statuslist': [{'status': 0, 'id': > u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, > flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:52) > 2018-02-20 14:39:13,634+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call StoragePool. 
connectStorageServer succeeded in 0.00 seconds > (__init__:539) > 2018-02-20 14:39:13,806+0800 INFO (jsonrpc/7) [vdsm.api] START > connectStoragePool(spUUID=u' 5a865884-0366-0330-02b8- 0000000002d4', > hostID=1, msdUUID=u'f3e372e3-1251-4195- a4b9-1027e40059df', > masterVersion=65, domainsMap={u'e83d0d46-6ea6- 4aa3-80bf-6e95c66b0454': > u'active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'active', > u'225e1975-8121-4370-b317- 86e964ae326f': u'attached', > u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'active', > u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'active', > u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'active'}, options=None) > from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776- > b5c9-22ba5d0cb853 (api:46) > 2018-02-20 14:39:13,807+0800 INFO (jsonrpc/7) [storage. > StoragePoolMemoryBackend] new storage pool master version 65 and domains > map {u'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': u'Active', > u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'Active', > u'225e1975-8121-4370-b317- 86e964ae326f': u'Attached', > u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'Active', > u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'Active', > u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'Active'} (spbackends:450) > > VDSM.log 2 > 2018-02-20 14:41:14,598+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- > 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, > 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, > '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', > 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', > 'lastCheck': '5.9', 'valid': True}, 
'42e591b7-f86c-4b67-a3d2- > 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=6213712b-9903-4db8- 9836-3baf85cd63e4 (api:52) > 2018-02-20 14:41:18,074+0800 INFO (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:24,333+0800 INFO (jsonrpc/4) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, > task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:46) > 2018-02-20 14:41:24,334+0800 INFO (jsonrpc/4) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': > '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', > 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4- > 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, > '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, > 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', > 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', > 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, > flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:52) > 2018-02-20 14:41:24,338+0800 INFO (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:27,087+0800 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:29,607+0800 INFO (periodic/1) [vdsm.api] START > repoStats(options=None) from=internal, 
task_id=f14b7aab-b64a-4903- > 9368-d665e39b49d1 (api:46) > 2018-02-20 14:41:29,608+0800 INFO (periodic/1) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', > 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- > 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, > 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, > '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', > 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', > 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- > 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, > task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:52) > 2018-02-20 14:41:33,079+0800 INFO (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:40,456+0800 INFO (jsonrpc/2) [vdsm.api] START > repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, > task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:46) > 2018-02-20 14:41:40,457+0800 INFO (jsonrpc/2) [vdsm.api] FINISH repoStats > return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': > True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': > '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', > 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- > 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, > '4bf2ba2f-f57a-4d9f-b42a- 
fb78f440a358': {'code': 0, 'actual': True, > 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', > 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', > 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, > flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:52) > 2018-02-20 14:41:40,461+0800 INFO (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC > call Host.getStats succeeded in 0.01 seconds (__init__:539) > 2018-02-20 14:41:42,106+0800 INFO (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] START > repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44- > b92c-213f9c984ab2 (api:46) > 2018-02-20 14:41:44,622+0800 INFO (periodic/3) [vdsm.api] FINISH > repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, > 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', > 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- > 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, > 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, > '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, > 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', > 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, > 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', > 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- > 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, > 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, > task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:52) > 2018-02-20 14:41:49,083+0800 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC > call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) > > > Engine log > 
> skId '404ccecc-aa7f-45ea-89e4- 726956269bc9' task status 'finished' > 2018-02-20 14:42:21,966+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- > thread-38) [29528f9] spmStart polling ended, spm status: SPM > 2018-02-20 14:42:21,967+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- > thread-38) [29528f9] START, HSMClearTaskVDSCommand( HostName = Node1, > HSMTaskGuidBaseVDSCommandParam eters:{runAsync='true', > hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea- > 89e4-726956269bc9'}), log id: 71688f70 > 2018-02-20 14:42:22,922+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- > thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 > 2018-02-20 14:42:22,923+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- > thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: > org.ovirt.engine.core.common. businessentities. SpmStatusResult at 78332453, > log id: 3ea35d5 > 2018-02-20 14:42:22,935+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-38) > [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph > 2018-02-20 14:42:22,951+08 INFO [org.ovirt.engine.core.dal. > dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- > thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: > null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: > Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7- thread-38) [29528f9] -- executeIrsBrokerCommand: > Attempting on storage pool '5a865884-0366-0330-02b8- 0000000002d4' > 2018-02-20 14:42:22,952+08 INFO [org.ovirt.engine.core. 
> vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7- thread-38) [29528f9] START, > HSMGetAllTasksInfoVDSCommand( HostName = Node1, > VdsIdVDSCommandParametersBase: {runAsync='true', > hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4'}), log id: 1bdbea9d > 2018-02-20 14:42:22,955+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7- thread-7) [29528f9] START, > SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{ > runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', > ignoreFailoverLimit='false'}), log id: 5c2422d6 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, > HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] > (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, > SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 > 2018-02-20 14:42:23,956+08 INFO [org.ovirt.engine.core.bll. > tasks.AsyncTaskManager] (org.ovirt.thread.pool-7- thread-38) [29528f9] > Discovered no tasks on Storage Pool 'UnsecuredEnv' > 2018-02-20 14:42:24,936+08 INFO [org.ovirt.vdsm.jsonrpc. > client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10. > 43.2 > 2018-02-20 14:42:27,012+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-43) [] > Master domain is not in sync between DB and VDSM. Domain Node1Container > marked as master in DB and not in the storage > 2018-02-20 14:42:27,026+08 WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling. 
AuditLogDirector] (org.ovirt.thread.pool-7- > thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_ SYNC(990), Correlation > ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: > Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: > Node1Container is marked as Master in oVirt Engine database but not on the > Storage side. Please consult with Support on how to fix this issue. > 2018-02-20 14:42:27,103+08 INFO [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-43) > [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. > Entities affected : ID: f3e372e3-1251-4195-a4b9- 1027e40059df Type: Storage > 2018-02-20 14:42:27,137+08 INFO [org.ovirt.engine.core. > vdsbroker.irsbroker. ResetIrsVDSCommand] (org.ovirt.thread.pool-7- > thread-43) [3e5965ca] START, ResetIrsVDSCommand( > ResetIrsVDSCommandParameters:{ runAsync='true', > storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', > ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', > ignoreStopFailed='true'}), log id: 3e0a239d > 2018-02-20 14:42:27,140+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- > thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, > SpmStopVDSCommandParameters:{ runAsync='true', hostId='7dee35bb-8c97-4f6a- > b6cd-abc4258540e4', storagePoolId='5a865884-0366- > 0330-02b8-0000000002d4'}), log id: 7c67bf06 > 2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core. > vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- > thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool > id '5a865884-0366-0330-02b8- 0000000002d4' > > > On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan < > mhke_aj5566 at yahoo.com> wrote: > > > Thanks for quick response, > > see attachment. 
> > On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote:
>
> Hi,
>
> Can you please attach full Engine and VDSM logs?
>
> Thanks,
>
> On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan <mhke_aj5566 at yahoo.com> wrote:
>
> My storage pool has 2 master domains (Stored2 on Node2, Node1Container on
> Node1), and my old master domain (DATANd01 on Node1) hung on preparing
> for maintenance. When I tried to activate the old master domain (DATANd01
> on Node1), all storage domains went down and up, and the master kept rotating.
>
> Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos)
>
> Event Error:
> Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain
> Stored2 is marked as master in the oVirt Engine database but not on the storage
> side. Please consult with support.
>
> VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or
> it's version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df,
> pool=5a865884-0366-0330-02b8-0000
>
> VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()
>
> Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)
>
> Here are the logs from the engine:
>
> -------------------------------------------------------------------------------------------
> [root at dev2engine ~]# tail /var/log/messages
> Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
> Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
> Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
> Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
> Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
> Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
> Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
> Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. > > > ------------------------------ ------------------------------ > ------------------------------ ------------- > [root at dev2engine ~]# tail /var/log/ovirt-engine/engine. log > 2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-32) > [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-23) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-23) > [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-20) > [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-17) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-17) > [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. 
Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-22) > [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > 2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core. > vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-26) [] > Master domain is not in sync between DB and VDSM. Domain Stored2 marked as > master in DB and not in the storage > 2018-02-20 08:01:51,846+08 WARN [org.ovirt.engine.core.bll. storage.pool. > ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-26) > [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user > SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ > DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status > PreparingForMaintenance > 2018-02-20 08:01:51,866+08 INFO [org.ovirt.engine.core.bll. > eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-49) > [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8- > 0000000002d4'. Clearing event queue > > > ______________________________ _________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/ mailman/listinfo/users > > > > > > -- > Regards, > Eyal Shenitzky > > > > > > > > -- > Regards, > Eyal Shenitzky > > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png
Type: image/png
Size: 75716 bytes
Desc: not available
URL:

From sakhi at sanren.ac.za Tue Feb 20 07:35:40 2018
From: sakhi at sanren.ac.za (Sakhi Hadebe)
Date: Tue, 20 Feb 2018 09:35:40 +0200
Subject: [ovirt-users] Ovirt Cluster Setup
Message-ID:

I have 3 Dell R515 servers, all installed with CentOS 7, and I am trying to set up an oVirt cluster.

Disk configuration:
2 x 1TB - Raid1 - OS Deployment
6 x 1TB - Raid 6 - Storage

Memory is 128GB.

I am following this documentation
https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
and I am getting the issue below:

PLAY [gluster_servers] *********************************************************

TASK [Run a shell script] ******************************************************
fatal: [ovirt2.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt3.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
fatal: [ovirt1.sanren.ac.za]: FAILED! => {"msg": "The conditional check 'result.rc != 0' failed. The error was: error while evaluating conditional (result.rc != 0): 'dict object' has no attribute 'rc'"}
        to retry, use: --limit @/tmp/tmpxFXyGG/run-script.retry

PLAY RECAP *********************************************************************
ovirt1.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1
ovirt2.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1
ovirt3.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1

*Error: Ansible(>= 2.2) is not installed.*
*Some of the features might not work if not installed.*

I have installed Ansible 2.4 on all the servers, but the error persists. Is there anything I can do to get rid of this error?
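The two messages in that output are probably two symptoms of one problem: the "Run a shell script" task registers a result and the playbook then evaluates `result.rc != 0`, but a registered result only carries an `rc` key when the command actually ran. If the script bails out early (which the "Ansible(>= 2.2) is not installed" banner suggests), evaluating the conditional itself fails with "'dict object' has no attribute 'rc'". A small sketch of that failure mode, using made-up result dicts rather than real playbook output:

```python
# Toy model of evaluating a playbook conditional like "result.rc != 0".
# The dicts below are hypothetical examples, not real gdeploy results.
def evaluate_rc_check(result: dict) -> bool:
    """Return True if the task should be treated as failed."""
    if "rc" not in result:
        # Mirrors Ansible's complaint: the conditional cannot even be
        # evaluated when the task never produced a return code.
        raise KeyError("'dict object' has no attribute 'rc'")
    return result["rc"] != 0

ran_ok = {"rc": 0, "stdout": "volume created"}           # script ran fine
never_ran = {"msg": "Ansible(>= 2.2) is not installed"}  # no 'rc' key

print(evaluate_rc_check(ran_ok))  # False: the rc check passes
try:
    evaluate_rc_check(never_ran)
except KeyError as err:
    print(err)
```

If that reading is right, the conditional error is just noise; the thing to chase is why the script's own Ansible version probe fails even though Ansible 2.4 is installed (a PATH or packaging difference is one guess from the error text alone).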
--
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

Tel: +27 12 841 2308 <+27128414213>
Fax: +27 12 841 4223 <+27128414223>
Cell: +27 71 331 9622 <+27823034657>
Email: sakhi at sanren.ac.za
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Jeremy_Tourville at hotmail.com Mon Feb 19 18:10:22 2018
From: Jeremy_Tourville at hotmail.com (Jeremy Tourville)
Date: Mon, 19 Feb 2018 18:10:22 +0000
Subject: [ovirt-users] Spice Client Connection Issues Using aSpice
In-Reply-To:
References: ,
Message-ID:

Hi Tomas,

To answer your question, yes I am really trying to use aSpice.

I appreciate your suggestion. I'm not sure if it meets my objective; maybe our goals are different? It seems to me that moVirt is built around portable management of the oVirt environment. I am attempting to provide a VDI-type experience for running a VM. My goal is to run a lab environment with 30 Chromebooks loaded with a SPICE client. The SPICE clients would of course connect to the 30 VMs running Kali, and each session would be independent of the others.

I did a little further testing with a different client (the SPICE plugin for Chrome). When I attempted to connect using that client I got a slightly different error message. The message still seemed to be of the same nature, i.e. there is a problem with the SSL protocol and communication.

Are you suggesting that moVirt can help set up the proper certificates and configure the VMs to use SPICE? Thanks!

________________________________
From: Tomas Jelinek
Sent: Monday, February 19, 2018 4:19 AM
To: Jeremy Tourville
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Spice Client Connection Issues Using aSpice

On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville wrote:

Hello, I am having trouble connecting to my guest vm (Kali Linux) which is running spice. My engine is running version: 4.2.1.7-1.el7.centos. I am using oVirt Node as my host running version: 4.2.1.1.
I have taken the following steps to try and get everything running properly.

1. Download the root CA certificate https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
2. Edit the vm and define the graphical console entries. Video type is set to QXL, Graphics protocol is spice, USB support is enabled.
3. Install the guest agent in Debian per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/ It is my understanding that installing the guest agent will also install the VirtIO device drivers.
4. Install the spice-vdagent per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/
5. On the aSpice client I have imported the CA certificate from step 1 above. I defined the connection using the IP of my Node and TLS port 5901.

Are you really using the aSPICE client (i.e. the Android SPICE client)? If yes, maybe you want to try to open it using moVirt (https://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt&hl=en), which delegates the console to aSPICE but configures everything, including the certificates, on it. It should be much simpler than configuring it by hand.

To troubleshoot my connection issues I confirmed the port being used to listen.
virsh # domdisplay Kali
spice://172.30.42.12?tls-port=5901

I see the following when attempting to connect.
tail -f /var/log/libvirt/qemu/Kali.log
140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1

I came across some documentation that states in the caveat section "Certificate of spice SSL should be separate certificate."
https://www.ovirt.org/develop/release-management/features/infra/pki/
Is this still the case for version 4? The document references versions 3.2 and 3.3.
If so, how do I generate a new certificate for use with spice? Please let me know if you require further info to troubleshoot, I am happy to provide it. Many thanks in advance. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjelinek at redhat.com Tue Feb 20 07:43:47 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Tue, 20 Feb 2018 08:43:47 +0100 Subject: [ovirt-users] 4.2 VM Portal -Create- VM section issue In-Reply-To: <06DC6162-4F48-4F6A-82F4-035A12715C64@ictv.com> References: <06DC6162-4F48-4F6A-82F4-035A12715C64@ictv.com> Message-ID: Hey Marko, thank you for all the logs, they helped me to understand the issue! We have hit something similar and started to fix it (wrongly :) ), so I explained the real problem here https://github.com/oVirt/ovirt-web-ui/pull/494/files#diff-3a773596665964819aa579d52d9feb94 so this PR will fix the actual issue you are facing. thank you, Tomas On Sat, Feb 17, 2018 at 8:11 AM, Vrgotic, Marko wrote: > Dear Tomas, > > > > In addition to previous email, find attached javascript console output > from browser: > > > > Kind regards, > > Marko Vrgotic > > > > *From: *"Vrgotic, Marko" > *Date: *Thursday, 25 January 2018 at 13:52 > *To: *Tomas Jelinek > *Cc: *users > *Subject: *Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue > > > > Hi Tomas, > > > > Thank you. > > > > VM does get created, so I think permission are in order: I will attach > them in next reply. > > > > As soon as possible I will attach all logs related. 
> > > > -- > > Met vriendelijke groet / Best regards, > > Marko Vrgotic > > System Engineer/Customer Care > > ActiveVideo > > > > > > *From: *"Vrgotic, Marko" > *Date: *Thursday, 25 January 2018 at 13:18 > *To: *Tomas Jelinek > *Cc: *users , "users-request at ovirt.org" < > users-request at ovirt.org> > *Subject: *Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue > > > > Hi Tomas, > > > > Thank you. > > > > VM does get created, so I think permission are in order: I will attach > them in next reply. > > > > As soon as possible I will attach all logs related. > > > > -- > > Met vriendelijke groet / Best regards, > > Marko Vrgotic > > System Engineer/Customer Care > > ActiveVideo > > > > > > *From: *Tomas Jelinek > *Date: *Thursday, 25 January 2018 at 13:03 > *To: *"Vrgotic, Marko" > *Cc: *users , "users-request at ovirt.org" < > users-request at ovirt.org> > *Subject: *Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue > > > > > > > > On 24 Jan 2018 5:17 p.m., "Vrgotic, Marko" > wrote: > > Dear oVirt, > > > > After setting all parameters for new VM and clicking on ?Create? button, > no progress status or that action is accepted is seen from webui. > > In addition, when closing the add VM section, I am asked if I am sure, due > to changes made. > > > > Is this expected behaviour? Can something be done about? > > no, it is not. > > > > can you please provide the logs from the javascript console in browser? > > > > can you please make sure the user has permissions to create a vm? > > > > > > Kindly awaiting your reply. > > > > -- > > Met vriendelijke groet / Best regards, > > Marko Vrgotic > > System Engineer/Customer Care > > ActiveVideo > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tjelinek at redhat.com Tue Feb 20 07:59:39 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Tue, 20 Feb 2018 08:59:39 +0100 Subject: [ovirt-users] Spice Client Connection Issues Using aSpice In-Reply-To: References: Message-ID: On Mon, Feb 19, 2018 at 7:10 PM, Jeremy Tourville < Jeremy_Tourville at hotmail.com> wrote: > Hi Tomas, > > To answer your question, yes I am really trying to use aSpice. > > > I appreciate your suggestion. I'm not sure if it meets my objective. Maybe > our goals are different? It seems to me that moVirt is built around > portable management of the oVirt environment. I am attempting to provide a > VDI-type experience for running a vm. My goal is to run a lab environment > with 30 chromebooks loaded with a spice client. The spice client would of > course connect to the 30 vms running Kali and each session would be > independent of the others. > yes, it looks like a different use case > > I did a little further testing with a different client (spice plugin > for chrome). When I attempted to connect using that client I got a > slightly different error message. The message still seemed to be of the > same nature, i.e. there is a problem with SSL protocol and communication. > > > > Are you suggesting that moVirt can help set up the proper certificates and > config the vms to use spice? Thanks! > moVirt has been developed for quite some time and works pretty well, which is why I recommended it. But anyway, you have a different use case. What I think the issue is: oVirt can have different CAs set for console communication and for the API, and I think you are trying to configure aSPICE to use the one for the API. What moVirt does to make sure it is using the correct CA to put into aSPICE is that it downloads the .vv file of the VM (e.g. you can just connect to the console using webadmin and save the .vv file somewhere), parses it and uses the CA= part from it as the certificate. This one is guaranteed to be the correct one. 
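The CA extraction Tomas describes can be sketched in a few lines. The .vv file that remote-viewer consumes is INI-formatted, and its `ca` key carries the PEM certificate with newlines escaped as literal "\n" sequences; the helper name and the sample values below are illustrative, not taken from this thread:

```python
import configparser

def extract_spice_ca(vv_text):
    """Pull the CA certificate out of a .vv (remote-viewer) file.

    .vv files are INI-style; the [virt-viewer] section carries a `ca`
    key whose PEM body has its newlines escaped as literal "\\n".
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(vv_text)
    # Un-escape the PEM block so it can be saved as e.g. ca.pem
    return cfg["virt-viewer"]["ca"].replace("\\n", "\n")

# Sample .vv content with a placeholder certificate body:
sample = """[virt-viewer]
type=spice
host=172.30.42.12
tls-port=5901
ca=-----BEGIN CERTIFICATE-----\\nMIIB...snipped...\\n-----END CERTIFICATE-----
"""
pem = extract_spice_ca(sample)
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```

The un-escaped PEM written to a file is what would then be imported into the SPICE client instead of the engine's API certificate.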
For more details about what else it takes from the .vv file you can check here: the parsing: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/rest/client/httpconverter/VvFileHttpMessageConverter.java configuration of aSPICE: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/util/ConsoleHelper.java enjoy :) > > > ------------------------------ > *From:* Tomas Jelinek > *Sent:* Monday, February 19, 2018 4:19 AM > *To:* Jeremy Tourville > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] Spice Client Connection Issues Using aSpice > > > > On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville < > Jeremy_Tourville at hotmail.com> wrote: > > Hello, > > I am having trouble connecting to my guest vm (Kali Linux) which is > running spice. My engine is running version: 4.2.1.7-1.el7.centos. > > I am using oVirt Node as my host running version: 4.2.1.1. > > > I have taken the following steps to try and get everything running > properly. > > 1. Download the root CA certificate https://ovirtengin > e.lan/ovirt-engine/services/pki-resource?resource=ca- > certificate&format=X509-PEM-CA > > 2. Edit the vm and define the graphical console entries. Video type > is set to QXL, Graphics protocol is spice, USB support is enabled. > 3. Install the guest agent in Debian per the instructions here - > https://www.ovirt.org/documentation/how-to/guest-agent/ > install-the-guest-agent-in-debian/ > > It is my understanding that installing the guest agent will also install > the virt IO device drivers. > 4. Install the spice-vdagent per the instructions here - > https://www.ovirt.org/documentation/how-to/guest-agent/ > install-the-spice-guest-agent/ > > 5. On the aSpice client I have imported the CA certficate from step 1 > above. I defined the connection using the IP of my Node and TLS port 5901. > > > are you really using aSPICE client (e.g. the android SPICE client?). 
If > yes, maybe you want to try to open it using moVirt ( > https://play.google.com/store/apps/details?id=org. > ovirt.mobile.movirt&hl=en) which delegates the console to aSPICE but > configures everything including the certificates on it. Should be much > simpler than configuring it by hand.. > > > > To troubleshoot my connection issues I confirmed the port being used to > listen. > virsh # domdisplay Kali > spice://172.30.42.12?tls-port=5901 > > I see the following when attempting to connect. > tail -f /var/log/libvirt/qemu/Kali.log > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert > internal error:s3_pkt.c:1493:SSL alert number 80 > ((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: > SSL_accept failed, error=1 > > I came across some documentation that states in the caveat section "Certificate > of spice SSL should be separate certificate." > https://www.ovirt.org/develop/release-management/features/infra/pki/ > > Is this still the case for version 4? The document references version 3.2 > and 3.3. If so, how do I generate a new certificate for use with spice? > Please let me know if you require further info to troubleshoot, I am happy > to provide it. Many thanks in advance. > > > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... 
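One way to narrow down an SSL_accept failure like the one above is to reproduce the handshake outside the SPICE client. This is only a sketch under assumptions: the host and port come from the `virsh domdisplay` output quoted earlier, and `ca.pem` is assumed to be the certificate downloaded from the engine's pki-resource URL.

```python
import ssl

def spice_tls_context(ca_path=None):
    """Client-side TLS context that trusts only the given CA bundle
    (e.g. the engine CA downloaded via the pki-resource URL)."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies peer certs by default
    if ca_path:
        ctx.load_verify_locations(cafile=ca_path)
    return ctx

# Probing the console port that `virsh domdisplay` reported
# (spice://172.30.42.12?tls-port=5901) -- commented out because it
# needs network access to the hypervisor:
#
# import socket
# with socket.create_connection(("172.30.42.12", 5901)) as sock:
#     with spice_tls_context("ca.pem").wrap_socket(
#             sock, server_hostname="172.30.42.12") as tls:
#         print(tls.getpeercert()["subject"])
```

If this handshake fails with "certificate verify failed" while the same context built from the CA= field of the VM's .vv file succeeds, that would confirm the imported certificate is the API CA rather than the console CA.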
URL: From mhke_aj5566 at yahoo.com Tue Feb 20 08:20:42 2018 From: mhke_aj5566 at yahoo.com (michael pagdanganan) Date: Tue, 20 Feb 2018 08:20:42 +0000 (UTC) Subject: [ovirt-users] 2 Master on Storage Pool [Event Error] In-Reply-To: References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> <708667852.2018838.1519110223090@mail.yahoo.com> Message-ID: <155155025.2021384.1519114843901@mail.yahoo.com> Datacenter Host On Tuesday, February 20, 2018 3:35 PM, Eyal Shenitzky wrote: What about the hosts status? Can you please send the hosts screenshot and the datacenter screenshot? On Tue, Feb 20, 2018 at 9:03 AM, michael pagdanganan wrote: Hi, Screenshot only 1 data center but 2 master domain and got event warning On Tuesday, February 20, 2018 2:56 PM, Eyal Shenitzky wrote: Also, can you please describe the current setup of the environment? I'm not sure I understand, do you have 2 data-centers? Please attach some screenshots of the current situation. On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote: Sorry can't attached log file it's too big file VDSM.log for node 1 2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c- 9e97-fc82d19a36b1 (api:52) 2018-02-20 14:38:47,222+0800 INFO? (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:46) 2018-02-20 14:38:47,222+0800 INFO? 
(jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:52) 2018-02-20 14:38:47,226+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:55,252+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:58,566+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:46) 2018-02-20 14:39:01,093+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:52) 2018-02-20 14:39:02,295+0800 INFO? (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:46) 2018-02-20 14:39:02,295+0800 INFO? 
(jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:52) 2018-02-20 14:39:02,300+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:39:10,270+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,631+0800 INFO? (jsonrpc/0) [vdsm.api] START connectStorageServer(domType= 1, spUUID=u'00000000-0000-0000- 0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/ media/root/Slave1Data/ dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:46) 2018-02-20 14:39:13,633+0800 INFO? 
(jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:52) 2018-02-20 14:39:13,634+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool. connectStorageServer succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,806+0800 INFO? (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u' 5a865884-0366-0330-02b8- 0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195- a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6- 4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'active', u'225e1975-8121-4370-b317- 86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776- b5c9-22ba5d0cb853 (api:46) 2018-02-20 14:39:13,807+0800 INFO? (jsonrpc/7) [storage. StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'Active', u'225e1975-8121-4370-b317- 86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'Active'} (spbackends:450) VDSM.log 22018-02-20 14:41:14,598+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8- 9836-3baf85cd63e4 (api:52) 2018-02-20 14:41:18,074+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:24,333+0800 INFO? (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:46) 2018-02-20 14:41:24,334+0800 INFO? 
(jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:52) 2018-02-20 14:41:24,338+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:27,087+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:29,607+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:46) 2018-02-20 14:41:29,608+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:52) 2018-02-20 14:41:33,079+0800 INFO? (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:40,456+0800 INFO? (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:46) 2018-02-20 14:41:40,457+0800 INFO? 
(jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:52) 2018-02-20 14:41:40,461+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:41:42,106+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:44,622+0800 INFO? (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:46) 2018-02-20 14:41:44,622+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:52) 2018-02-20 14:41:49,083+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) Engine log skId '404ccecc-aa7f-45ea-89e4- 726956269bc9' task status 'finished' 2018-02-20 14:42:21,966+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] spmStart polling ended, spm status: SPM 2018-02-20 14:42:21,967+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMClearTaskVDSCommand( HostName = Node1, HSMTaskGuidBaseVDSCommandParam eters:{runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea- 89e4-726956269bc9'}), log id: 71688f70 2018-02-20 14:42:22,922+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 2018-02-20 14:42:22,923+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. 
SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. businessentities. SpmStatusResult at 78332453, log id: 3ea35d5 2018-02-20 14:42:22,935+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph 2018-02-20 14:42:22,951+08 INFO? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8- 0000000002d4' 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand( HostName = Node1, VdsIdVDSCommandParametersBase: {runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4'}), log id: 1bdbea9d 2018-02-20 14:42:22,955+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. 
SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.bll. tasks.AsyncTaskManager] (org.ovirt.thread.pool-7- thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv' 2018-02-20 14:42:24,936+08 INFO? [org.ovirt.vdsm.jsonrpc. client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10. 43.2 2018-02-20 14:42:27,012+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage 2018-02-20 14:42:27,026+08 WARN? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_ SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 2018-02-20 14:42:27,103+08 INFO? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :? ID: f3e372e3-1251-4195-a4b9- 1027e40059df Type: Storage 2018-02-20 14:42:27,137+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. ResetIrsVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d 2018-02-20 14:42:27,140+08 INFO? [org.ovirt.engine.core. 
vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{ runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4'}), log id: 7c67bf06 2018-02-20 14:42:28,144+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStopVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8- 0000000002d4' On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan wrote: Thanks for quick response, see attachment. On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote: Hi,? Can you please attach full Engine and VDSM logs? Thanks, On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote: My storage pool has 2 Master Domain(Stored2 on Node2,Node1Container on Node1). and my old master domain (DATANd01 on Node1 ) hung on preparing for maintenance. When I tried to activate my old master domain (DATANd01 on Node1 ) all? storage domain goes down and up master keep on rotating. Ovirt Version (oVirt Engine Version: 4.1.9.1.el7.centos) Event Error:Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in ovirt engine Database but not on storage side Please consult with support VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or it's version: u'SD=f3e372e3-1251-4195-a4b9- 1027e40059df, pool=5a865884-0366-0330-02b8- 0000 VDSM Node2 command HSMGetAllTastsStatusesVDS failed: Not SPM: () Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv) Here's logs from engine: ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/messages Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root. Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root. 
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root. Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root. Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root. Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root. Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root. Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root. Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root. ------------------------------ ------------------------------ ------------------------------ -------------[root at dev2engine ~]# tail /var/log/ovirt-engine/engine. log 2018-02-20 08:01:16,062+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:27,825+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:27,862+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:27,882+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:40,106+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-17) [] Master domain is not in sync between DB and VDSM. 
Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:40,197+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:40,246+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue 2018-02-20 08:01:51,809+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage 2018-02-20 08:01:51,846+08 WARN? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_ MASTER,VAR__TYPE__STORAGE__ DOMAIN,ACTION_TYPE_FAILED_ STORAGE_DOMAIN_STATUS_ ILLEGAL2,$status PreparingForMaintenance 2018-02-20 08:01:51,866+08 INFO? [org.ovirt.engine.core.bll. eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7- thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8- 0000000002d4'. Clearing event queue ______________________________ _________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/ mailman/listinfo/users -- Regards,Eyal Shenitzky -- Regards,Eyal Shenitzky -- Regards,Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
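The recurring warning in the engine log above ("Master domain is not in sync between DB and VDSM") can be pulled out of a large engine.log mechanically. A small sketch: the helper name and regex are ours, matched against the wording in the excerpts, and it only locates the warnings; the underlying master-domain mismatch still needs the reconstruct/support path discussed in the thread.

```python
import re

# Wording taken from the engine.log excerpts quoted in this thread.
SYNC_RE = re.compile(
    r"Master domain is not in sync between DB and VDSM\. "
    r"Domain (?P<domain>\S+) marked as master in DB and not in the storage"
)

def find_sync_errors(log_lines):
    """Return the storage-domain names the engine reports as master
    in its DB but not on the storage side."""
    return [m.group("domain")
            for line in log_lines
            if (m := SYNC_RE.search(line))]

sample = [
    "2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] "
    "(org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. "
    "Domain Stored2 marked as master in DB and not in the storage",
]
print(find_sync_errors(sample))  # ['Stored2']
```

Running it over the full engine.log would show how often the warning fires and whether more than one domain name appears, which matches the "2 master domains" symptom reported here.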
Name: image.png
Type: image/png
Size: 75716 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 69253 bytes
Desc: not available
URL: 

From mhke_aj5566 at yahoo.com Tue Feb 20 08:21:10 2018
From: mhke_aj5566 at yahoo.com (michael pagdanganan)
Date: Tue, 20 Feb 2018 08:21:10 +0000 (UTC)
Subject: [ovirt-users] 2 Master on Storage Pool [Event Error]
In-Reply-To: <155155025.2021384.1519114843901@mail.yahoo.com>
References: <1021521808.1848498.1519088345208.ref@mail.yahoo.com> <1021521808.1848498.1519088345208@mail.yahoo.com> <1101288079.1992124.1519108431417@mail.yahoo.com> <1802931026.2010356.1519108987398@mail.yahoo.com> <708667852.2018838.1519110223090@mail.yahoo.com> <155155025.2021384.1519114843901@mail.yahoo.com>
Message-ID: <316250442.2040173.1519114872623@mail.yahoo.com>

Host

On Tuesday, February 20, 2018 4:20 PM, michael pagdanganan wrote:
Datacenter Host

On Tuesday, February 20, 2018 3:35 PM, Eyal Shenitzky wrote:
What about the hosts' status? Can you please send the hosts screenshot and the datacenter screenshot?

On Tue, Feb 20, 2018 at 9:03 AM, michael pagdanganan wrote:
Hi, the screenshot shows only 1 data center, but there are 2 master domains, and I got an event warning.

On Tuesday, February 20, 2018 2:56 PM, Eyal Shenitzky wrote:
Also, can you please describe the current setup of the environment? I'm not sure I understand, do you have 2 data-centers? Please attach some screenshots of the current situation.

On Tue, Feb 20, 2018 at 8:43 AM, michael pagdanganan wrote:
Sorry, I can't attach the log file; it's too big.

VDSM.log for node 1:
2766', 'lastCheck': '4.9', 'valid': True}} from=internal, task_id=645d456e-f59f-4b1c-9e97-fc82d19a36b1 (api:52)
2018-02-20 14:38:47,222+0800 INFO (jsonrpc/3) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68-a9fe-a00006be268f (api:46)
2018-02-20 14:38:47,222+0800 INFO
(jsonrpc/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000119109', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000195263', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.00022988', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000270206', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000272766', 'lastCheck': '6.0', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=3b2e802e, task_id=28c795d1-1639-4e68- a9fe-a00006be268f (api:52) 2018-02-20 14:38:47,226+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:55,252+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:38:58,566+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:01,093+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:46) 2018-02-20 14:39:01,093+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.3888e-05', 'lastCheck': '9.9', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000197535', 'lastCheck': '9.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000191456', 'lastCheck': '9.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00026365', 'lastCheck': '9.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '0.0', 'valid': True}} from=internal, task_id=04f55ded-5841-44ae- a376-4f6e723e4b10 (api:52) 2018-02-20 14:39:02,295+0800 INFO? (jsonrpc/6) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:46) 2018-02-20 14:39:02,295+0800 INFO? 
(jsonrpc/6) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.4759e-05', 'lastCheck': '1.1', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000183158', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000222609', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000211253', 'lastCheck': '1.0', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000243658', 'lastCheck': '1.1', 'valid': True}} from=::ffff:10.10.43.1,60554, flow_id=13149d6e, task_id=5f19d76a-d343-4dc4- ad09-024ec27f7443 (api:52) 2018-02-20 14:39:02,300+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:39:10,270+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,631+0800 INFO? (jsonrpc/0) [vdsm.api] START connectStorageServer(domType= 1, spUUID=u'00000000-0000-0000- 0000-000000000000', conList=[{u'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123', u'connection': u'dev2node1.lares.com.ph:/run/ media/root/Slave1Data/ dataNode1', u'iqn': u'', u'user': u'', u'tpgt': u'1', u'protocol_version': u'auto', u'password': '********', u'port': u''}], options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:46) 2018-02-20 14:39:13,633+0800 INFO? 
(jsonrpc/0) [vdsm.api] FINISH connectStorageServer return={'statuslist': [{'status': 0, 'id': u'5e4c94f1-f3b9-4fbd-a6c7- e732d0fe3123'}]} from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=bfce1e70-ef4a-4e13- aaaa-7a66aaf44429 (api:52) 2018-02-20 14:39:13,634+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call StoragePool. connectStorageServer succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:39:13,806+0800 INFO? (jsonrpc/7) [vdsm.api] START connectStoragePool(spUUID=u' 5a865884-0366-0330-02b8- 0000000002d4', hostID=1, msdUUID=u'f3e372e3-1251-4195- a4b9-1027e40059df', masterVersion=65, domainsMap={u'e83d0d46-6ea6- 4aa3-80bf-6e95c66b0454': u'active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'active', u'225e1975-8121-4370-b317- 86e964ae326f': u'attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'active'}, options=None) from=::ffff:10.10.43.1,60554, flow_id=15b57417, task_id=927a5d9a-4304-4776- b5c9-22ba5d0cb853 (api:46) 2018-02-20 14:39:13,807+0800 INFO? (jsonrpc/7) [storage. StoragePoolMemoryBackend] new storage pool master version 65 and domains map {u'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': u'Active', u'4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': u'Active', u'225e1975-8121-4370-b317- 86e964ae326f': u'Attached', u'f3e372e3-1251-4195-a4b9- 1027e40059df': u'Active', u'65ca2e2d-b472-4bee-85b4- 09a161464b20': u'Active', u'42e591b7-f86c-4b67-a3d2- 40cc007f7662': u'Active'} (spbackends:450) VDSM.log 22018-02-20 14:41:14,598+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000252423', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.5567e-05', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.2433e-05', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000237247', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '8.4376e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=6213712b-9903-4db8- 9836-3baf85cd63e4 (api:52) 2018-02-20 14:41:18,074+0800 INFO? (jsonrpc/6) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:24,333+0800 INFO? (jsonrpc/4) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:46) 2018-02-20 14:41:24,334+0800 INFO? 
(jsonrpc/4) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000172953', 'lastCheck': '5.7', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '7.701e-05', 'lastCheck': '5.7', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000117677', 'lastCheck': '5.7', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000185146', 'lastCheck': '5.7', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1467e-05', 'lastCheck': '5.8', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=73f86113, task_id=e8231ecb-3543-4d8f- af54-4cf2b06ee98a (api:52) 2018-02-20 14:41:24,338+0800 INFO? (jsonrpc/4) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:27,087+0800 INFO? (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:29,607+0800 INFO? (periodic/1) [vdsm.api] START repoStats(options=None) from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:46) 2018-02-20 14:41:29,608+0800 INFO? 
(periodic/1) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000246237', 'lastCheck': '1.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '8.8773e-05', 'lastCheck': '1.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.7145e-05', 'lastCheck': '1.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000218729', 'lastCheck': '0.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.1035e-05', 'lastCheck': '1.0', 'valid': True}} from=internal, task_id=f14b7aab-b64a-4903- 9368-d665e39b49d1 (api:52) 2018-02-20 14:41:33,079+0800 INFO? (jsonrpc/7) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:40,456+0800 INFO? (jsonrpc/2) [vdsm.api] START repoStats(options=None) from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:46) 2018-02-20 14:41:40,457+0800 INFO? 
(jsonrpc/2) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '1.8', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '1.8', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '1.8', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '1.8', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '1.9', 'valid': True}} from=::ffff:10.10.43.1,56540, flow_id=3be8150c, task_id=1b0c86a8-6fd8-4882- a742-fbd56ccb4037 (api:52) 2018-02-20 14:41:40,461+0800 INFO? (jsonrpc/2) [jsonrpc.JsonRpcServer] RPC call Host.getStats succeeded in 0.01 seconds (__init__:539) 2018-02-20 14:41:42,106+0800 INFO? (jsonrpc/3) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) 2018-02-20 14:41:44,622+0800 INFO? (periodic/3) [vdsm.api] START repoStats(options=None) from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:46) 2018-02-20 14:41:44,622+0800 INFO? 
(periodic/3) [vdsm.api] FINISH repoStats return={'f3e372e3-1251-4195- a4b9-1027e40059df': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000174027', 'lastCheck': '6.0', 'valid': True}, 'e83d0d46-6ea6-4aa3-80bf- 6e95c66b0454': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.00011454', 'lastCheck': '6.0', 'valid': True}, '65ca2e2d-b472-4bee-85b4- 09a161464b20': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '0.000145955', 'lastCheck': '6.0', 'valid': True}, '4bf2ba2f-f57a-4d9f-b42a- fb78f440a358': {'code': 0, 'actual': True, 'version': 0, 'acquired': True, 'delay': '0.000209871', 'lastCheck': '5.9', 'valid': True}, '42e591b7-f86c-4b67-a3d2- 40cc007f7662': {'code': 0, 'actual': True, 'version': 4, 'acquired': True, 'delay': '9.5744e-05', 'lastCheck': '6.0', 'valid': True}} from=internal, task_id=20e0b29b-d3cd-4e44- b92c-213f9c984ab2 (api:52) 2018-02-20 14:41:49,083+0800 INFO? (jsonrpc/1) [jsonrpc.JsonRpcServer] RPC call Host.getAllVmStats succeeded in 0.00 seconds (__init__:539) Engine log skId '404ccecc-aa7f-45ea-89e4- 726956269bc9' task status 'finished' 2018-02-20 14:42:21,966+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] spmStart polling ended, spm status: SPM 2018-02-20 14:42:21,967+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMClearTaskVDSCommand( HostName = Node1, HSMTaskGuidBaseVDSCommandParam eters:{runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', taskId='404ccecc-aa7f-45ea- 89e4-726956269bc9'}), log id: 71688f70 2018-02-20 14:42:22,922+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMClearTaskVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMClearTaskVDSCommand, log id: 71688f70 2018-02-20 14:42:22,923+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. 
SpmStartVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SpmStartVDSCommand, return: org.ovirt.engine.core.common. businessentities. SpmStatusResult at 78332453, log id: 3ea35d5 2018-02-20 14:42:22,935+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-38) [29528f9] Initialize Irs proxy from vds: dev2node1.lares.com.ph 2018-02-20 14:42:22,951+08 INFO? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-38) [29528f9] EVENT_ID: IRS_HOSTED_ON_VDS(204), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Pool Manager runs on Host Node1 (Address: dev2node1.lares.com.ph). 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] -- executeIrsBrokerCommand: Attempting on storage pool '5a865884-0366-0330-02b8- 0000000002d4' 2018-02-20 14:42:22,952+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] START, HSMGetAllTasksInfoVDSCommand( HostName = Node1, VdsIdVDSCommandParametersBase: {runAsync='true', hostId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4'}), log id: 1bdbea9d 2018-02-20 14:42:22,955+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-7) [29528f9] START, SPMGetAllTasksInfoVDSCommand( IrsBaseVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false'}), log id: 5c2422d6 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.vdsbroker. HSMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, HSMGetAllTasksInfoVDSCommand, return: [], log id: 1bdbea9d 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. 
SPMGetAllTasksInfoVDSCommand] (org.ovirt.thread.pool-7- thread-38) [29528f9] FINISH, SPMGetAllTasksInfoVDSCommand, return: [], log id: 58cbe7b7 2018-02-20 14:42:23,956+08 INFO? [org.ovirt.engine.core.bll. tasks.AsyncTaskManager] (org.ovirt.thread.pool-7- thread-38) [29528f9] Discovered no tasks on Storage Pool 'UnsecuredEnv' 2018-02-20 14:42:24,936+08 INFO? [org.ovirt.vdsm.jsonrpc. client.reactors.ReactorClient] (SSL Stomp Reactor) [] Connecting to dev2node1.lares.com.ph/10.10. 43.2 2018-02-20 14:42:27,012+08 WARN? [org.ovirt.engine.core. vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7- thread-43) [] Master domain is not in sync between DB and VDSM. Domain Node1Container marked as master in DB and not in the storage 2018-02-20 14:42:27,026+08 WARN? [org.ovirt.engine.core.dal. dbbroker.auditloghandling. AuditLogDirector] (org.ovirt.thread.pool-7- thread-43) [] EVENT_ID: SYSTEM_MASTER_DOMAIN_NOT_IN_ SYNC(990), Correlation ID: null, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Sync Error on Master Domain between Host Node1 and oVirt Engine. Domain: Node1Container is marked as Master in oVirt Engine database but not on the Storage side. Please consult with Support on how to fix this issue. 2018-02-20 14:42:27,103+08 INFO? [org.ovirt.engine.core.bll. storage.pool. ReconstructMasterDomainCommand ] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] Running command: ReconstructMasterDomainCommand internal: true. Entities affected :? ID: f3e372e3-1251-4195-a4b9- 1027e40059df Type: Storage 2018-02-20 14:42:27,137+08 INFO? [org.ovirt.engine.core. vdsbroker.irsbroker. ResetIrsVDSCommand] (org.ovirt.thread.pool-7- thread-43) [3e5965ca] START, ResetIrsVDSCommand( ResetIrsVDSCommandParameters:{ runAsync='true', storagePoolId='5a865884-0366- 0330-02b8-0000000002d4', ignoreFailoverLimit='false', vdsId='7dee35bb-8c97-4f6a- b6cd-abc4258540e4', ignoreStopFailed='true'}), log id: 3e0a239d 2018-02-20 14:42:27,140+08 INFO? [org.ovirt.engine.core. 
vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] START, SpmStopVDSCommand(HostName = Node1, SpmStopVDSCommandParameters:{runAsync='true', hostId='7dee35bb-8c97-4f6a-b6cd-abc4258540e4', storagePoolId='5a865884-0366-0330-02b8-0000000002d4'}), log id: 7c67bf06
2018-02-20 14:42:28,144+08 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand] (org.ovirt.thread.pool-7-thread-43) [3e5965ca] SpmStopVDSCommand::Stopping SPM on vds 'Node1', pool id '5a865884-0366-0330-02b8-0000000002d4'

On Tuesday, February 20, 2018 2:33 PM, michael pagdanganan wrote:
Thanks for the quick response; see the attachment.

On Tuesday, February 20, 2018 2:10 PM, Eyal Shenitzky wrote:
Hi,

Can you please attach the full Engine and VDSM logs?

Thanks,

On Tue, Feb 20, 2018 at 2:59 AM, michael pagdanganan wrote:
My storage pool has 2 master domains (Stored2 on Node2, Node1Container on Node1), and my old master domain (DATANd01 on Node1) hung on "Preparing for Maintenance". When I tried to activate the old master domain (DATANd01 on Node1), all storage domains went down and up, and the master kept rotating.

oVirt version: oVirt Engine Version 4.1.9.1.el7.centos

Event errors:
Sync Error on Master Domain between Host Node2 and oVirt Engine. Domain Stored2 is marked as master in the oVirt Engine database but not on the storage side. Please consult with support.
VDSM Node2 command ConnectStoragePoolVDS failed: Wrong Master domain or its version: u'SD=f3e372e3-1251-4195-a4b9-1027e40059df, pool=5a865884-0366-0330-02b8-0000
VDSM Node2 command HSMGetAllTasksStatusesVDS failed: Not SPM: ()
Failed to deactivate Storage Domain DATANd01 (Data Center UnsecuredEnv)

Here are the logs from the engine:
---------------------------------
[root at dev2engine ~]# tail /var/log/messages
Feb 20 07:01:01 dev2engine systemd: Starting Session 20 of user root.
Feb 20 07:01:01 dev2engine systemd: Removed slice User Slice of root.
Feb 20 07:01:01 dev2engine systemd: Stopping User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Created slice User Slice of root.
Feb 20 07:58:52 dev2engine systemd: Starting User Slice of root.
Feb 20 07:58:52 dev2engine systemd-logind: New session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Started Session 21 of user root.
Feb 20 07:58:52 dev2engine systemd: Starting Session 21 of user root.
Feb 20 08:01:01 dev2engine systemd: Started Session 22 of user root.
Feb 20 08:01:01 dev2engine systemd: Starting Session 22 of user root.
---------------------------------------------------------------------------------
[root at dev2engine ~]# tail /var/log/ovirt-engine/engine.log
2018-02-20 08:01:16,062+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-32) [102e9d3c] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:27,825+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-23) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:27,862+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-23) [213f42b9] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:27,882+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-20) [929330e] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:40,106+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-17) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:40,197+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-17) [7af552c1] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:40,246+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-22) [73673040] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
2018-02-20 08:01:51,809+08 WARN [org.ovirt.engine.core.vdsbroker.irsbroker.IrsProxy] (org.ovirt.thread.pool-7-thread-26) [] Master domain is not in sync between DB and VDSM. Domain Stored2 marked as master in DB and not in the storage
2018-02-20 08:01:51,846+08 WARN [org.ovirt.engine.core.bll.storage.pool.ReconstructMasterDomainCommand] (org.ovirt.thread.pool-7-thread-26) [20307cbe] Validation of action 'ReconstructMasterDomain' failed for user SYSTEM. Reasons: VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status PreparingForMaintenance
2018-02-20 08:01:51,866+08 INFO [org.ovirt.engine.core.bll.eventqueue.EventQueueMonitor] (org.ovirt.thread.pool-7-thread-49) [2c11a866] Finished reconstruct for pool '5a865884-0366-0330-02b8-0000000002d4'. Clearing event queue
_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

-- 
Regards,
Eyal Shenitzky

-- 
Regards,
Eyal Shenitzky

-- 
Regards,
Eyal Shenitzky
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 75716 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 69253 bytes
Desc: not available
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 41793 bytes
Desc: not available
URL: 

From karli at inparadise.se Tue Feb 20 08:56:26 2018
From: karli at inparadise.se (Karli Sjöberg)
Date: Tue, 20 Feb 2018 09:56:26 +0100
Subject: [ovirt-users] Spice Client Connection Issues Using aSpice
In-Reply-To: 
References: 
Message-ID: <1519116986.1980.6.camel@inparadise.se>

On Tue, 2018-02-20 at 08:59 +0100, Tomas Jelinek wrote:
> On Mon, Feb 19, 2018 at 7:10 PM, Jeremy Tourville otmail.com> wrote:
> > Hi Tomas,
> > To answer your question, yes I am really trying to use aSPICE.
> > 
> > I appreciate your suggestion. I'm not sure if it meets my objective. Maybe our goals are different? It seems to me that moVirt is built around portable management of the oVirt environment. I am attempting to provide a VDI-type experience for running a VM. My goal is to run a lab environment with 30 Chromebooks loaded with a SPICE client. The SPICE client would of course connect to the 30 VMs running Kali, and each session would be independent of the others.
> 
> yes, it looks like a different use case
> 
> > I did a little further testing with a different client (the SPICE plugin for Chrome). When I attempted to connect using that client I got a slightly different error message. The message still seemed to be of the same nature, i.e. there is a problem with the SSL protocol and communication.
> > 
> > Are you suggesting that moVirt can help set up the proper certificates and configure the VMs to use SPICE? Thanks!
> 
> moVirt has been developed for quite some time and works pretty well, this is why I recommended it.
> But anyway, you have a different use case.
> 
> What I think the issue is, is that oVirt can have different CAs set for console communication and for the API. And I think you are trying to configure aSPICE to use the one for the API.
> 
> What moVirt does to make sure it is using the correct CA to put into aSPICE is that it downloads the .vv file of the VM (e.g. you can just connect to the console using webadmin and save the .vv file somewhere), parses it, and uses the CA= part from it as the certificate. This one is guaranteed to be the correct one.
> 
> For more details about what else it takes from the .vv file you can check here:
> the parsing: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/rest/client/httpconverter/VvFileHttpMessageConverter.java
> configuration of aSPICE: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/util/ConsoleHelper.java
> 
> enjoy :)

Feels to me like the OP should try to get it working _any_ "normal" way before trying to get the special use case application working? Like trying to run before learning to crawl, if that makes sense? I would suggest just logging in to webadmin with a regular PC and trying to get a SPICE console with remote-viewer to begin with. Then, once that works, try to get a SPICE console working through moVirt with aSPICE on an Android phone, or one of the Chromebooks you have to play with, before going into production. Once that's settled and you know it should work the way you normally access it, you can start playing with your special use case application.

Hope it helps!
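The CA= extraction from a .vv file that Tomas describes above is easy to reproduce outside moVirt. A minimal Python sketch: the layout follows the virt-viewer .vv format (an INI file with a [virt-viewer] section whose ca= value escapes newlines as the two literal characters \n). The host, port, and file name below are illustrative only, not taken from a real deployment.

```python
import configparser

def extract_ca(vv_path):
    """Return the CA certificate from a .vv console file as a normal
    multi-line PEM string.

    A .vv file is an INI file with a [virt-viewer] section; its ca=
    value stores the PEM with newlines escaped as backslash-n.
    """
    cfg = configparser.ConfigParser()
    with open(vv_path) as f:
        cfg.read_file(f)
    return cfg["virt-viewer"]["ca"].replace("\\n", "\n")

# Self-contained demo: write a .vv-style file, then pull the CA out.
# The host/port mirror the ones mentioned in the thread; the PEM body
# is a placeholder, not a real certificate.
sample = (
    "[virt-viewer]\n"
    "type=spice\n"
    "host=172.30.42.12\n"
    "tls-port=5901\n"
    "ca=-----BEGIN CERTIFICATE-----\\nMIIBplaceholder\\n"
    "-----END CERTIFICATE-----\\n\n"
)
with open("console.vv", "w") as f:
    f.write(sample)

pem = extract_ca("console.vv")
```

The resulting `pem` string can be saved to a file and imported into a SPICE client directly.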
/K

> From: Tomas Jelinek
> Sent: Monday, February 19, 2018 4:19 AM
> To: Jeremy Tourville
> Cc: users at ovirt.org
> Subject: Re: [ovirt-users] Spice Client Connection Issues Using aSpice
> 
> On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville @hotmail.com> wrote:
> > > Hello,
> > > I am having trouble connecting to my guest VM (Kali Linux) which is running SPICE. My engine is running version: 4.2.1.7-1.el7.centos.
> > > I am using oVirt Node as my host running version: 4.2.1.1.
> > > 
> > > I have taken the following steps to try and get everything running properly.
> > > Download the root CA certificate https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
> > > Edit the VM and define the graphical console entries. Video type is set to QXL, graphics protocol is SPICE, USB support is enabled.
> > > Install the guest agent in Debian per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/ It is my understanding that installing the guest agent will also install the virtio device drivers.
> > > Install the spice-vdagent per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/
> > > On the aSpice client I have imported the CA certificate from step 1 above. I defined the connection using the IP of my Node and TLS port 5901.
> 
> > are you really using the aSPICE client (i.e. the Android SPICE client)? If yes, maybe you want to try to open it using moVirt (https://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt&hl=en) which delegates the console to aSPICE but configures everything, including the certificates, on it. Should be much simpler than configuring it by hand..
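One way to test Tomas's different-CAs theory by hand: take the CA= value from a saved .vv file and the CA downloaded from the engine's pki-resource URL quoted above, and check whether they are byte-identical. A hedged sketch using only Python's standard library; the `same_cert` helper is hypothetical, not part of any oVirt tooling:

```python
import ssl

def same_cert(pem_a: str, pem_b: str) -> bool:
    """True when two PEM certificates carry identical DER bytes,
    ignoring line-wrapping and surrounding whitespace."""
    der_a = ssl.PEM_cert_to_DER_cert(pem_a.strip())
    der_b = ssl.PEM_cert_to_DER_cert(pem_b.strip())
    return der_a == der_b

# To see which certificate the SPICE TLS port actually presents (needs
# a live host, so not run here):
#   served = ssl.get_server_certificate(("172.30.42.12", 5901))
# same_cert() is meant for comparing the two CA candidates: the CA=
# value from a saved .vv file vs. the CA downloaded from the engine's
# pki-resource endpoint. If those differ, aSPICE was fed the wrong one.
```

If the two CAs differ, the SSL alert 80 / `SSL_accept failed` symptoms later in this thread would be the expected outcome of trusting the wrong one.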
> > > To troubleshoot my connection issues I confirmed the port being used to listen.
> > > virsh # domdisplay Kali
> > > spice://172.30.42.12?tls-port=5901
> > > 
> > > I see the following when attempting to connect.
> > > tail -f /var/log/libvirt/qemu/Kali.log
> > > 
> > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
> > > ((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1
> > > 
> > > I came across some documentation that states in the caveat section "Certificate of spice SSL should be separate certificate."
> > > https://www.ovirt.org/develop/release-management/features/infra/pki/
> > > 
> > > Is this still the case for version 4? The document references versions 3.2 and 3.3. If so, how do I generate a new certificate for use with SPICE? Please let me know if you require further info to troubleshoot; I am happy to provide it. Many thanks in advance.
> > > 
> > > _______________________________________________
> > > Users mailing list
> > > Users at ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> 
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 455 bytes
Desc: This is a digitally signed message part
URL: 

From Robert.Langley at ventura.org Tue Feb 20 10:17:39 2018
From: Robert.Langley at ventura.org (Langley, Robert)
Date: Tue, 20 Feb 2018 10:17:39 +0000
Subject: [ovirt-users] Cannot delete auto-generated snapshot
Message-ID: 

I was moving some virtual disks from one storage server to another.
Now, I have a couple servers that have the auto-generated snapshot, without disks, and I cannot delete them. The VM will not start and there is the complaint that the disks are illegal. Any help would be appreciated. I'm going to bed for now, but will try to wake up earlier. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Tue Feb 20 10:49:53 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Tue, 20 Feb 2018 12:49:53 +0200 Subject: [ovirt-users] Cannot delete auto-generated snapshot In-Reply-To: References: Message-ID: Hey Robert, Can you please attach the VDSM and Engine log? Also, please write the version of the engine you are working with. On Tue, Feb 20, 2018 at 12:17 PM, Langley, Robert < Robert.Langley at ventura.org> wrote: > I was moving some virtual disks from one storage server to another. Now, I > have a couple servers that have the auto-generated snapshot, without disks, > and I cannot delete them. The VM will not start and there is the complaint > that the disks are illegal. > > Any help would be appreciated. I'm going to bed for now, but will try to > wake up earlier. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From M.Vrgotic at activevideo.com Tue Feb 20 10:34:58 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Tue, 20 Feb 2018 10:34:58 +0000 Subject: [ovirt-users] 4.2 VM Portal -Create- VM section issue In-Reply-To: References: <06DC6162-4F48-4F6A-82F4-035A12715C64@ictv.com> Message-ID: Hi Tomas, You are more than welcome. I will patiently ? wait for the fix. 
Kind regards, Marko From: Tomas Jelinek Date: Tuesday, 20 February 2018 at 08:43 To: "Vrgotic, Marko" Cc: users Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hey Marko, thank you for all the logs, they helped me to understand the issue! We have hit something similar and started to fix it (wrongly :) ), so I explained the real problem here https://github.com/oVirt/ovirt-web-ui/pull/494/files#diff-3a773596665964819aa579d52d9feb94 so this PR will fix the actual issue you are facing. thank you, Tomas On Sat, Feb 17, 2018 at 8:11 AM, Vrgotic, Marko > wrote: Dear Tomas, In addition to previous email, find attached javascript console output from browser: Kind regards, Marko Vrgotic From: "Vrgotic, Marko" > Date: Thursday, 25 January 2018 at 13:52 To: Tomas Jelinek > Cc: users > Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hi Tomas, Thank you. VM does get created, so I think permission are in order: I will attach them in next reply. As soon as possible I will attach all logs related. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo From: "Vrgotic, Marko" > Date: Thursday, 25 January 2018 at 13:18 To: Tomas Jelinek > Cc: users >, "users-request at ovirt.org" > Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue Hi Tomas, Thank you. VM does get created, so I think permission are in order: I will attach them in next reply. As soon as possible I will attach all logs related. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo From: Tomas Jelinek > Date: Thursday, 25 January 2018 at 13:03 To: "Vrgotic, Marko" > Cc: users >, "users-request at ovirt.org" > Subject: Re: [ovirt-users] 4.2 VM Portal -Create- VM section issue On 24 Jan 2018 5:17 p.m., "Vrgotic, Marko" > wrote: Dear oVirt, After setting all parameters for new VM and clicking on ?Create? button, no progress status or that action is accepted is seen from webui. 
In addition, when closing the add VM section, I am asked if I am sure, due to changes made. Is this expected behaviour? Can something be done about? no, it is not. can you please provide the logs from the javascript console in browser? can you please make sure the user has permissions to create a vm? Kindly awaiting your reply. -- Met vriendelijke groet / Best regards, Marko Vrgotic System Engineer/Customer Care ActiveVideo _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From msteele at telvue.com Tue Feb 20 10:52:18 2018 From: msteele at telvue.com (Mark Steele) Date: Tue, 20 Feb 2018 05:52:18 -0500 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: ?Is it possible that the HostedEngine became corrupted somehow and that is preventing us from adding hosts? Is creating a new hosted engine an option? *** *Mark Steele* CIO / VP Technical Operations | TelVue Corporation TelVue - We Share Your Vision 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue On Mon, Feb 19, 2018 at 9:55 AM, Mark Steele wrote: > At this point I'm wondering if there is anyone in the community that > freelances and would be willing to provide remote support to resolve this > issue? > > We are running with 1/2 our normal hosts, and not being able to add > anymore back into the cluster is a serious problem. > > Best regards, > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. 
> facebook.com/telvue > > On Sat, Feb 17, 2018 at 12:53 PM, Mark Steele wrote: > >> Yaniv, >> >> I have one of my developers assisting me and we are continuing to run >> into issues. This is a note from him: >> >> Hi, I'm trying to add a host to ovirt, but I'm running into package >> dependency problems. I have existing hosts that are working and integrated >> properly, and inspecting those, I am able to match the packages between the >> new host and the existing, but when I then try to add the new host to >> ovirt, it fails on reinstall because it's trying to install packages that >> are later versions. does the installation run list from ovirt-release35 >> 002-1 have unspecified versions? The working hosts use libvirt-1.1.1-29, >> and vdsm-4.16.7, but it's trying to install vdsm-4.16.30, which requires a >> higher version of libvirt, at which point, the installation fails. is there >> some way I can specify which package versions the ovirt install procedure >> uses? or better yet, skip the package management step entirely? >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: >> >>> >>> >>> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele >>> wrote: >>> >>>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>>> Version: 3.5.0.1-1.el6 >>>> >>> >>> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >>> which is a result of a default change of libvirt and was fixed in later >>> versions of oVirt than the one you are using. >>> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, you >>> can probably configure it manually. >>> Y. 
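[Editorial aside] For those trying Yaniv's "configure it manually" route: the linked bug concerns newer libvirt defaulting to a SASL mechanism (GSSAPI) that older vdsm cannot negotiate, and the manual workaround usually discussed is pinning the old mechanism in libvirt's SASL config. The fragment below is illustrative only — the file path, mechanism name, and account are assumptions to verify against the gerrit patch, not a tested recipe for this 3.5 setup:

```ini
; /etc/sasl2/libvirt.conf -- illustrative fragment only; verify against
; https://gerrit.ovirt.org/#/c/76934/ before applying.
; Newer libvirt defaults to GSSAPI; older vdsm expects digest-md5 here.
mech_list: digest-md5
```

After such a change, libvirtd typically needs a restart, and the vdsm SASL credentials may need re-adding (e.g. with `saslpasswd2 -a libvirt ...`); again, treat every name above as a hypothesis to check, not a verified procedure.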
>>> >>> >>>> >>>> We have four other hosts that are running this same configuration >>>> already. I took one host out of the cluster (forcefully) that was working >>>> and now it will not add back in either - throwing the same SASL error. >>>> >>>> We are looking at downgrading libvirt as I've seen that somewhere else >>>> - is there another version of RH I should be trying? I have a host I can >>>> put it on. >>>> >>>> >>>> >>>> *** >>>> *Mark Steele* >>>> CIO / VP Technical Operations | TelVue Corporation >>>> TelVue - We Share Your Vision >>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>> >>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>> www.telvue.com >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>> .com/telvue >>>> >>>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>>> >>>>> >>>>> >>>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>>> >>>>> Hello all, >>>>> >>>>> We recently had a network event where we lost access to our storage >>>>> for a period of time. The Cluster basically shut down all our VM's and in >>>>> the process we had three HV's that went offline and would not communicate >>>>> properly with the cluster. >>>>> >>>>> We have since completely reinstalled CentOS on the hosts and attempted >>>>> to install them into the cluster with no joy. We've gotten to the point >>>>> where we generally get an error message in the web gui: >>>>> >>>>> >>>>> Which EL release and which oVirt release are you using? My guess would >>>>> be latest EL, with an older oVirt? >>>>> Y. >>>>> >>>>> >>>>> Stage: Misc Configuration >>>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>>> during SSH session 'root at 10.1.90.154'. 
>>>>> >>>>> the following is what we are seeing in the messages log: >>>>> >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>> Error -4 in server.c near line 1757) >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>> Input/output error >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>> Error -4 in server.c near line 1757) >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>> Input/output error >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>> authentication failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call >>>>> last): >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>>>> 219, in main >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>>> tool_command[cmd]["command"](*args) >>>>> Feb 16 11:39:53 
hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>>> line 83, in upgrade_networks >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>>> networks >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>> 159, in get >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>> 95, in _open_qemu_connection >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>>>> timeout=10, sleep=0.2) >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in openAuth >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>>> libvirtError('virConnectOpenAuth() failed') >>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>>>> failed: authentication failed >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>>> process exited, code=exited status=1 >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>>> Server Manager network restoration. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>>> Desktop Server Manager. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed >>>>> with result 'dependency'. >>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>>>> failed state. 
>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>>>> >>>>> Can someone point me in the right direction to resolve this - it seems >>>>> to be a SASL issue perhaps? >>>>> >>>>> *** >>>>> *Mark Steele* >>>>> CIO / VP Technical Operations | TelVue Corporation >>>>> TelVue - We Share Your Vision >>>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>>> >>>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>>> www.telvue.com >>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>>> .com/telvue >>>>> >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>> >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sabose at redhat.com Tue Feb 20 10:53:37 2018 From: sabose at redhat.com (Sahina Bose) Date: Tue, 20 Feb 2018 16:23:37 +0530 Subject: [ovirt-users] Ovirt Cluster Setup In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 1:05 PM, Sakhi Hadebe wrote: > I have 3 Dell R515 servers all installed with centOS 7, and trying to > setup an oVirt Cluster. > > Disks configurations: > 2 x 1TB - Raid1 - OS Deployment > 6 x 1TB - Raid 6 - Storage > > ?Memory is 128GB > > I am following this documentation https://www. > ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ > and I am getting the issue below: > > PLAY [gluster_servers] ****************************** > *************************** > > TASK [Run a shell script] ****************************** > ************************ > fatal: [ovirt2.sanren.ac.za]: FAILED! 
=> {"msg": "The conditional check > 'result.rc != 0' failed. The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > fatal: [ovirt3.sanren.ac.za]: FAILED! => {"msg": "The conditional check > 'result.rc != 0' failed. The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > fatal: [ovirt1.sanren.ac.za]: FAILED! => {"msg": "The conditional check > 'result.rc != 0' failed. The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > to retry, use: --limit @/tmp/tmpxFXyGG/run-script.retry > > PLAY RECAP ************************************************************ > ********* > ovirt1.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > ovirt2.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > ovirt3.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > > *Error: Ansible(>= 2.2) is not installed.* > *Some of the features might not work if not installed.* > Can you provide the gdeploy version used, and also the gdeploy.conf ? > > ?I have installed ansible2.4 in all the servers, but the error persists. > > Is there anything I can do to get rid of this error? > -- > Regards, > Sakhi Hadebe > > Engineer: South African National Research Network (SANReN)Competency Area, Meraka, CSIR > > Tel: +27 12 841 2308 <+27128414213> > Fax: +27 12 841 4223 <+27128414223> > Cell: +27 71 331 9622 <+27823034657> > Email: sakhi at sanren.ac.za > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
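[Editorial aside] The underlying ansible failure ("'dict object' has no attribute 'rc'") is a common pattern: if the registered task was skipped or never actually executed, the result dict has no `rc` key, and a bare `result.rc != 0` conditional then blows up — the "Ansible(>= 2.2) is not installed" line is just gdeploy's misleading wrap-up of that. Defensive playbooks guard the lookup; an illustrative fragment (not gdeploy's actual task definition):

```yaml
# Illustrative guard -- not gdeploy's actual task definition.
- name: Run a shell script
  script: "{{ script_path }}"
  register: result
  # ignore_errors lets the next task inspect result instead of aborting the play
  ignore_errors: true

- name: Fail with a clear message when the script did not run cleanly
  fail:
    msg: "script failed: {{ result.msg | default('no rc returned') }}"
  when: result.rc is not defined or result.rc != 0
```

The `is not defined` clause short-circuits, so the comparison is only evaluated when the script really produced a return code.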
URL: From jiri.slezka at slu.cz Tue Feb 20 12:03:48 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Tue, 20 Feb 2018 13:03:48 +0100 Subject: [ovirt-users] problem importing ova vm Message-ID: Hi, I would like to try import some ova files into our oVirt instance [1] [2] but I facing problems. I have downloaded all ova images into one of hosts (ovirt01) into direcory /ova ll /ova/ total 6532872 -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 HAAS-hptelnetd.ova -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova -rw-r--r--. 1 vdsm kvm 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova Then I tried to import them - from host ovirt01 and directory /ova but spinner spins infinitly and nothing is happen. I cannot see anything relevant in vdsm log of host ovirt01. In the engine.log of our standalone ovirt manager is just this relevant line 2018-02-20 12:35:04,289+01 INFO [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin [/usr/bin/ansible-playbook, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, --inventory=/tmp/ansible-inventory8237874608161160784, --extra-vars=ovirt_query_ova_path=/ova, /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log] also there are two ansible processes which are still running (and makes heavy load on system (load 9+ and growing, it looks like it eats all the memory and system starts swapping)) ovirt 32087 3.3 0.0 332252 5980 ? 
Sl 12:35 0:41 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml ovirt 32099 57.5 78.9 15972880 11215312 ? R 12:35 11:52 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml playbook looks like - hosts: all remote_user: root gather_facts: no roles: - ovirt-ova-query and it looks like it only runs query_ova.py but on all hosts? How does this work? ...or should it work? I am using latest 4.2.1.7-1.el7.centos version Cheers, Jiri Slezka [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From ahadas at redhat.com Tue Feb 20 12:22:33 2018 From: ahadas at redhat.com (Arik Hadas) Date: Tue, 20 Feb 2018 14:22:33 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka wrote: > Hi, > Hi Ji??, > > I would like to try import some ova files into our oVirt instance [1] > [2] but I facing problems. > > I have downloaded all ova images into one of hosts (ovirt01) into > direcory /ova > > ll /ova/ > total 6532872 > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 HAAS-hptelnetd.ova > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova > -rw-r--r--. 
1 vdsm kvm 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova > -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova > > Then I tried to import them - from host ovirt01 and directory /ova but > spinner spins infinitly and nothing is happen. > And does it work when you provide a path to the actual ova file, i.e., /ova/HAAS-hpdio.ova, rather than to the directory? > > I cannot see anything relevant in vdsm log of host ovirt01. > > In the engine.log of our standalone ovirt manager is just this relevant > line > > 2018-02-20 12:35:04,289+01 INFO > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > [/usr/bin/ansible-playbook, > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > --inventory=/tmp/ansible-inventory8237874608161160784, > --extra-vars=ovirt_query_ova_path=/ova, > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net.slu.cz.log] > > also there are two ansible processes which are still running (and makes > heavy load on system (load 9+ and growing, it looks like it eats all the > memory and system starts swapping)) > > ovirt 32087 3.3 0.0 332252 5980 ? Sl 12:35 0:41 > /usr/bin/python2 /usr/bin/ansible-playbook > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > --inventory=/tmp/ansible-inventory8237874608161160784 > --extra-vars=ovirt_query_ova_path=/ova > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > ovirt 32099 57.5 78.9 15972880 11215312 ? 
R 12:35 11:52 > /usr/bin/python2 /usr/bin/ansible-playbook > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > --inventory=/tmp/ansible-inventory8237874608161160784 > --extra-vars=ovirt_query_ova_path=/ova > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > playbook looks like > > - hosts: all > remote_user: root > gather_facts: no > > roles: > - ovirt-ova-query > > and it looks like it only runs query_ova.py but on all hosts? > No, the engine provides ansible the host to run on when it executes the playbook. It would only be executed on the selected host. > > How does this work? ...or should it work? > It should, especially that part of querying the OVA and is supposed to be really quick. Can you please share the engine log and /var/log/ovirt-engine/ova/ ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ? > > I am using latest 4.2.1.7-1.el7.centos version > > Cheers, > Jiri Slezka > > > [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS > [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From endre.karlson at gmail.com Tue Feb 20 13:09:08 2018 From: endre.karlson at gmail.com (Endre Karlson) Date: Tue, 20 Feb 2018 14:09:08 +0100 Subject: [ovirt-users] CPU queues on ovirt hosts. Message-ID: Hi guys, is there a way to have CPU queues go down when having a java app on a ovirt hosT ? we have a idm app where the cpu queue is constantly 2-3 when we are doing things with the configuration but on esx on a similar host it is much faster -------------- next part -------------- An HTML attachment was scrubbed... 
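[Editorial aside] Separately from the hang itself: an OVA is, per the OVF specification, a tar archive whose descriptor (an .ovf XML file) is all the query phase needs to read — which may also explain the listing above, where a 1.1 GB file carries an .ovf extension and is presumably a full OVA. A rough way to pull the descriptor out for sanity-checking (this is not oVirt's query_ova.py):

```python
import tarfile

def read_ovf(ova_path):
    """Return the text of the first .ovf member inside an OVA (tar) file."""
    with tarfile.open(ova_path) as tar:
        for member in tar.getmembers():
            if member.isfile() and member.name.lower().endswith(".ovf"):
                return tar.extractfile(member).read().decode("utf-8")
    raise ValueError("no OVF descriptor found in %s" % ova_path)
```

If `tarfile.open` refuses the file outright, the image is probably not a valid OVA at all, which would make the engine's "Failed to load VM configuration" unsurprising.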
URL: From spfma.tech at e.mail.fr Tue Feb 20 13:33:02 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Tue, 20 Feb 2018 14:33:02 +0100 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: References: Message-ID: <20180220133302.E3CD8E1D50@smtp01.mail.de> Hi, Here are lines I have found for my last faulty try : ENGINE 2018-02-19 17:52:27,283+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [1320afb0] Running command: TransferImageStatusCommand internal: true. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER 2018-02-19 17:52:27,290+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3) [1320afb0] Lock freed to object 'EngineLock:{exclusiveLocks='', sharedLocks='[]'}' 2018-02-19 17:52:28,658+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Running command: TransferImageStatusCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK with role type USER 2018-02-19 17:52:28,659+01 INFO [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Updating image transfer 1c55d561-45bf-4e57-b3b6-8fcbf3734a28 (image af5997c5-ae69-4677-9d86-30a978cf83a5) phase to Paused by System (message: 'Sent 405200MB') 2018-02-19 17:52:28,665+01 WARN [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] EVENT_ID: UPLOAD_IMAGE_NETWORK_ERROR(1,038), Unable to upload image to disk af5997c5-ae69-4677-9d86-30a978cf83a5 due to a network error. 
Make sure ovirt-imageio-proxy service is installed and configured, and ovirt-engine's certificate is registered as a valid CA in the browser. The certificate can be fetched from https:///ovirt-engine/services/pki-resource?resource=ca-certificate font-size: 10pt; color: #000000;">PROXY (Thread-4087) ERROR 2018-02-19 17:52:28,644 images:143:root:(make_imaged_request) Failed communicating with host: A Connection error occurred. Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", line 134, in make_imaged_request timeout=timeout, stream=stream) File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, in send r = adapter.send(request, **kwargs) File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 415, in send raise ConnectionError(err, request=request) ConnectionError: ('Connection aborted.', error(32, 'Broken pipe')) (Thread-4087) ERROR 2018-02-19 17:52:28,645 web:112:web:(log_error) ERROR [10.100.0.184] PUT /images/f64acb43-d153-485d-b441-9f5d42773a03: [503] Failed communicating with host: A Connection error occurred. 
(0.01s) Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", line 64, in __call__ resp = self.dispatch(request) File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", line 91, in dispatch return method(*match.groups()) File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", line 104, in wrapper return func(self, *args) File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", line 60, in wrapper ret = func(self, *args) File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", line 97, in put self.request.method, imaged_url, headers, body, stream) File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", line 144, in make_imaged_request raise exc.HTTPServiceUnavailable(s) I saw new updates so I applied them to the whole cluster and the engine VM, and finally rebooted everything. My new try this morning was OK. Maybe one service just needed a restart ? Regards Le 20-Feb-2018 06:54:49 +0100, ishaby at redhat.com a crit: Hi, Can you please attach the engine, vdsm, daemon and proxy logs? Regards, Idan On Mon, Feb 19, 2018 at 11:17 AM, wrote: Hi, I am trying to build a new vm based on a vhd image coming from a windows machine. I converted the image to raw, and I am now trying to import it in the engine. After setting up the CA in my browser, the import process starts but stops after a while with "paused by system" status. I can resume it, but it pauses without transferring more. The engine logs don't explain much, I see a line for the start and the next one for the pause. My network seems to work correctly, and I have plenty of space in the storage domain. What can cause the process to pause ? 
Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiri.slezka at slu.cz Tue Feb 20 13:49:08 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Tue, 20 Feb 2018 14:49:08 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: Message-ID: Hi Arik, On 02/20/2018 01:22 PM, Arik Hadas wrote: > > > On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka > wrote: > > Hi, > > > Hi Ji??, > ? > > > I would like to try import some ova files into our oVirt instance [1] > [2] but I facing problems. > > I have downloaded all ova images into one of hosts (ovirt01) into > direcory /ova > > ll /ova/ > total 6532872 > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > -rw-r--r--. 1 vdsm kvm? 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > -rw-r--r--. 1 vdsm kvm? 891043328 Feb 16 16:23 HAAS-hptelnetd.ova > -rw-r--r--. 1 vdsm kvm? 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova > -rw-r--r--. 1 vdsm kvm? 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova > -rw-r--r--. 1 vdsm kvm? 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova > > Then I tried to import them - from host ovirt01 and directory /ova but > spinner spins infinitly and nothing is happen. > > > And does it work when you provide a path to the actual ova file, i.e., > /ova/HAAS-hpdio.ova, rather than to the directory? this time it ends with "Failed to load VM configuration from OVA file: /ova/HAAS-hpdio.ova" error. > I cannot see anything relevant in vdsm log of host ovirt01. 
> > In the engine.log of our standalone ovirt manager is just this > relevant line > > 2018-02-20 12:35:04,289+01 INFO > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > [/usr/bin/ansible-playbook, > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > --inventory=/tmp/ansible-inventory8237874608161160784, > --extra-vars=ovirt_query_ova_path=/ova, > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > .slu.cz.log] > > also there are two ansible processes which are still running (and makes > heavy load on system (load 9+ and growing, it looks like it eats all the > memory and system starts swapping)) > > ovirt? ? 32087? 3.3? 0.0 332252? 5980 ?? ? ? ? Sl? ?12:35? ?0:41 > /usr/bin/python2 /usr/bin/ansible-playbook > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > --inventory=/tmp/ansible-inventory8237874608161160784 > --extra-vars=ovirt_query_ova_path=/ova > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > ovirt? ? 32099 57.5 78.9 15972880 11215312 ?? ?R? ? 12:35? 11:52 > /usr/bin/python2 /usr/bin/ansible-playbook > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > --inventory=/tmp/ansible-inventory8237874608161160784 > --extra-vars=ovirt_query_ova_path=/ova > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > playbook looks like > > - hosts: all > ? remote_user: root > ? gather_facts: no > > ? roles: > ? ? - ovirt-ova-query > > and it looks like it only runs query_ova.py but on all hosts? > > > No, the engine provides ansible the host to run on when it executes the > playbook. > It would only be executed on the selected host. > ? > > > How does this work? ...or should it work? > > > It should, especially that part of querying the OVA and is supposed to > be really quick. 
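[Editorial aside] The reason `hosts: all` does not fan out to every hypervisor is visible in the engine log above: the command includes `--inventory=/tmp/ansible-inventory8237874608161160784`, a per-run generated inventory containing only the selected host, so `all` resolves to exactly one machine. Conceptually the file looks like this (the real file is temporary and its exact content is an assumption):

```ini
; /tmp/ansible-inventory8237874608161160784 -- illustrative content only
ovirt01.net.slu.cz
```

With such an inventory, `ansible-playbook -i <that file> ovirt-ova-query.yml` runs the ovirt-ova-query role on ovirt01 and nowhere else.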
> Can you please share the engine log and > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > .slu.cz.log ? engine log is here: https://pastebin.com/nWWM3UUq file /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) Cheers, Jiri Slezka > ? > > > I am using latest 4.2.1.7-1.el7.centos version > > Cheers, > Jiri Slezka > > > [1] https://haas.cesnet.cz/#!index.md > - Cesnet HAAS > [2] https://haas.cesnet.cz/downloads/release-01/ > - Image repository > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From ahadas at redhat.com Tue Feb 20 14:48:02 2018 From: ahadas at redhat.com (Arik Hadas) Date: Tue, 20 Feb 2018 16:48:02 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 3:49 PM, Ji?? Sl??ka wrote: > Hi Arik, > > On 02/20/2018 01:22 PM, Arik Hadas wrote: > > > > > > On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka > > wrote: > > > > Hi, > > > > > > Hi Ji??, > > > > > > > > I would like to try import some ova files into our oVirt instance [1] > > [2] but I facing problems. > > > > I have downloaded all ova images into one of hosts (ovirt01) into > > direcory /ova > > > > ll /ova/ > > total 6532872 > > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf > > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 HAAS-hptelnetd.ova > > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova > > -rw-r--r--. 
1 vdsm kvm 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova > > -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova > > > > Then I tried to import them - from host ovirt01 and directory /ova > but > > spinner spins infinitly and nothing is happen. > > > > > > And does it work when you provide a path to the actual ova file, i.e., > > /ova/HAAS-hpdio.ova, rather than to the directory? > > this time it ends with "Failed to load VM configuration from OVA file: > /ova/HAAS-hpdio.ova" error. Note that the logic that is applied on a specified folder is "try fetching an 'ova folder' out of the destination folder" rather than "list all the ova files inside the specified folder". It seems that you expected the former output since there are no disks in that folder, right? > > > I cannot see anything relevant in vdsm log of host ovirt01. > > > > In the engine.log of our standalone ovirt manager is just this > > relevant line > > > > 2018-02-20 12:35:04,289+01 INFO > > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] > (default > > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible > > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > > [/usr/bin/ansible-playbook, > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > > --inventory=/tmp/ansible-inventory8237874608161160784, > > --extra-vars=ovirt_query_ova_path=/ova, > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > .slu.cz.log] > > > > also there are two ansible processes which are still running (and > makes > > heavy load on system (load 9+ and growing, it looks like it eats all > the > > memory and system starts swapping)) > > > > ovirt 32087 3.3 0.0 332252 5980 ? 
Sl 12:35 0:41 > > /usr/bin/python2 /usr/bin/ansible-playbook > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > --inventory=/tmp/ansible-inventory8237874608161160784 > > --extra-vars=ovirt_query_ova_path=/ova > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > ovirt 32099 57.5 78.9 15972880 11215312 ? R 12:35 11:52 > > /usr/bin/python2 /usr/bin/ansible-playbook > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > --inventory=/tmp/ansible-inventory8237874608161160784 > > --extra-vars=ovirt_query_ova_path=/ova > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > > > playbook looks like > > > > - hosts: all > > remote_user: root > > gather_facts: no > > > > roles: > > - ovirt-ova-query > > > > and it looks like it only runs query_ova.py but on all hosts? > > > > > > No, the engine provides ansible the host to run on when it executes the > > playbook. > > It would only be executed on the selected host. > > > > > > > > How does this work? ...or should it work? > > > > > > It should, especially that part of querying the OVA and is supposed to > > be really quick. > > Can you please share the engine log and > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > .slu.cz.log ? > > engine log is here: > > https://pastebin.com/nWWM3UUq Thanks. Alright, so now the configuration is fetched but its processing fails. We fixed many issues in this area recently, but it appears that something is wrong with the actual size of the disk within the ovf file that resides inside this ova file. Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova? > > > file > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) > This issue is also resolved in 4.2.2. 
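Since the suspicion above concerns the declared disk size, the relevant attributes can be checked directly once the descriptor is extracted from the ova. The snippet below uses a made-up, minimal stand-in for the real HAAS-hpdio.ovf; in the OVF schema the DiskSection's Disk element carries ovf:capacity (with ovf:capacityAllocationUnits, and for some producers ovf:populatedSize), which is what the import logic has to interpret:

```shell
# /tmp/demo.ovf is a hypothetical, minimal stand-in for the descriptor that
# could be extracted from the real archive with: tar xf HAAS-hpdio.ova HAAS-hpdio.ovf
cat > /tmp/demo.ovf <<'EOF'
<Envelope xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <DiskSection>
    <Disk ovf:diskId="vmdisk1" ovf:fileRef="file1"
          ovf:capacity="20" ovf:capacityAllocationUnits="byte * 2^30"/>
  </DiskSection>
</Envelope>
EOF

# Pull out the declared capacity the importer has to parse:
grep -o 'ovf:capacity="[^"]*"' /tmp/demo.ovf   # → ovf:capacity="20"
```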
In the meantime, please create the /var/log/ovirt-engine/ova/ folder manually and make sure its permissions match the ones of the other folders in /var/log/ovirt-engine. > > > Cheers, > > Jiri Slezka > > > > > > > > > I am using latest 4.2.1.7-1.el7.centos version > > > > Cheers, > > Jiri Slezka > > > > > > [1] https://haas.cesnet.cz/#!index.md > > - Cesnet HAAS > > [2] https://haas.cesnet.cz/downloads/release-01/ > > - Image repository > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Tue Feb 20 16:24:09 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Tue, 20 Feb 2018 18:24:09 +0200 Subject: [ovirt-users] Unable to add Hosts to Cluster In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 12:52 PM, Mark Steele wrote: > ?Is it possible that the HostedEngine became corrupted somehow and that is > preventing us from adding hosts? > I doubt that. I still suspect the libvirt auth. issue. Nevertheless, as commented more than once, you are running on somewhat old version with a recent CentOS version. Not sure this combination is tested or anyone's running it. > > Is creating a new hosted engine an option? > You could backup and restore to a new HE. Y. > > > *** > *Mark Steele* > CIO / VP Technical Operations | TelVue Corporation > TelVue - We Share Your Vision > 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 > > 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// > www.telvue.com > twitter: http://twitter.com/telvue | facebook: https://www. 
> facebook.com/telvue > > On Mon, Feb 19, 2018 at 9:55 AM, Mark Steele wrote: > >> At this point I'm wondering if there is anyone in the community that >> freelances and would be willing to provide remote support to resolve this >> issue? >> >> We are running with 1/2 our normal hosts, and not being able to add >> anymore back into the cluster is a serious problem. >> >> Best regards, >> >> >> *** >> *Mark Steele* >> CIO / VP Technical Operations | TelVue Corporation >> TelVue - We Share Your Vision >> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >> >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Sat, Feb 17, 2018 at 12:53 PM, Mark Steele wrote: >> >>> Yaniv, >>> >>> I have one of my developers assisting me and we are continuing to run >>> into issues. This is a note from him: >>> >>> Hi, I'm trying to add a host to ovirt, but I'm running into package >>> dependency problems. I have existing hosts that are working and integrated >>> properly, and inspecting those, I am able to match the packages between the >>> new host and the existing, but when I then try to add the new host to >>> ovirt, it fails on reinstall because it's trying to install packages that >>> are later versions. does the installation run list from ovirt-release35 >>> 002-1 have unspecified versions? The working hosts use libvirt-1.1.1-29, >>> and vdsm-4.16.7, but it's trying to install vdsm-4.16.30, which requires a >>> higher version of libvirt, at which point, the installation fails. is there >>> some way I can specify which package versions the ovirt install procedure >>> uses? or better yet, skip the package management step entirely? >>> >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. 
Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: >>> >>>> >>>> >>>> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele >>>> wrote: >>>> >>>>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>>>> Version: 3.5.0.1-1.el6 >>>>> >>>> >>>> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >>>> which is a result of a default change of libvirt and was fixed in later >>>> versions of oVirt than the one you are using. >>>> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, >>>> you can probably configure it manually. >>>> Y. >>>> >>>> >>>>> >>>>> We have four other hosts that are running this same configuration >>>>> already. I took one host out of the cluster (forcefully) that was working >>>>> and now it will not add back in either - throwing the same SASL error. >>>>> >>>>> We are looking at downgrading libvirt as I've seen that somewhere else >>>>> - is there another version of RH I should be trying? I have a host I can >>>>> put it on. >>>>> >>>>> >>>>> >>>>> *** >>>>> *Mark Steele* >>>>> CIO / VP Technical Operations | TelVue Corporation >>>>> TelVue - We Share Your Vision >>>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>>> >>>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>>> www.telvue.com >>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>>> .com/telvue >>>>> >>>>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>>>> >>>>>> >>>>>> >>>>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>>>> >>>>>> Hello all, >>>>>> >>>>>> We recently had a network event where we lost access to our storage >>>>>> for a period of time. 
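For the record, the manual workaround commonly reported for the libvirt SASL default change referenced in that bug is to pin the mechanism list back to digest-md5, which the 3.5-era vdsm expects. This is an untested sketch, not a verified fix; check the bug report before applying it:

```shell
# Inspect the mechanism list libvirtd currently offers:
grep mech_list /etc/sasl2/libvirt.conf

# Commonly reported workaround: force the older digest-md5 mechanism,
# then restart libvirtd so the change takes effect.
sed -i 's/^mech_list:.*/mech_list: digest-md5/' /etc/sasl2/libvirt.conf
systemctl restart libvirtd
```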
The Cluster basically shut down all our VM's and in >>>>>> the process we had three HV's that went offline and would not communicate >>>>>> properly with the cluster. >>>>>> >>>>>> We have since completely reinstalled CentOS on the hosts and >>>>>> attempted to install them into the cluster with no joy. We've gotten to the >>>>>> point where we generally get an error message in the web gui: >>>>>> >>>>>> >>>>>> Which EL release and which oVirt release are you using? My guess >>>>>> would be latest EL, with an older oVirt? >>>>>> Y. >>>>>> >>>>>> >>>>>> Stage: Misc Configuration >>>>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>>>> during SSH session 'root at 10.1.90.154'. >>>>>> >>>>>> the following is what we are seeing in the messages log: >>>>>> >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>> authentication failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>>> Error -4 in server.c near line 1757) >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>>> failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>>> Input/output error >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>> authentication failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal error: >>>>>> cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: Internal >>>>>> Error -4 in server.c near line 1757) >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 
2018-02-16 16:39:53.963+0000: >>>>>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>>> failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>>> Input/output error >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>> authentication failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call >>>>>> last): >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", line >>>>>> 219, in main >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>>>> tool_command[cmd]["command"](*args) >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>>>> line 83, in upgrade_networks >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>>>> networks >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>>> 159, in get >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>>> 95, in _open_qemu_connection >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return utils.retry(libvirtOpen, >>>>>> timeout=10, sleep=0.2) >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in retry >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in >>>>>> openAuth >>>>>> Feb 16 
11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>>>> libvirtError('virConnectOpenAuth() failed') >>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>>>>> failed: authentication failed >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>>>> process exited, code=exited status=1 >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>>>> Server Manager network restoration. >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>>>> Desktop Server Manager. >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed >>>>>> with result 'dependency'. >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service entered >>>>>> failed state. >>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>>>>> >>>>>> Can someone point me in the right direction to resolve this - it >>>>>> seems to be a SASL issue perhaps? >>>>>> >>>>>> *** >>>>>> *Mark Steele* >>>>>> CIO / VP Technical Operations | TelVue Corporation >>>>>> TelVue - We Share Your Vision >>>>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>>>> >>>>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>>>> www.telvue.com >>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>>>> .com/telvue >>>>>> >>>>>> _______________________________________________ >>>>>> Users mailing list >>>>>> Users at ovirt.org >>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>> >>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Chris.Yeun at smiths-detection.com Mon Feb 19 17:43:44 2018 From: Chris.Yeun at smiths-detection.com (Yeun, Chris (DNWK)) Date: Mon, 19 Feb 2018 17:43:44 +0000 Subject: [ovirt-users] Moving VMs to another cluster In-Reply-To: <77361061-2974-43CB-A35E-3D55489E4CFF@redhat.com> References: <1518736436742.88456@smiths-detection.com>, <77361061-2974-43CB-A35E-3D55489E4CFF@redhat.com> Message-ID: <1519062224716.96553@smiths-detection.com> ?Yes, after parsing through the logs...it turned out to be selinux. The destination host had selinux disabled and the source had it as permissive. Set the destination host selinux to permissive, reboot, relabel and I was able to migrate between the clusters. Thanks for your help. ________________________________ From: Michal Skrivanek Sent: Monday, February 19, 2018 7:40 AM To: Yeun, Chris (DNWK) Cc: users at ovirt.org Subject: Re: [ovirt-users] Moving VMs to another cluster On 16 Feb 2018, at 00:13, Yeun, Chris (DNWK) > wrote: ?Hello, How do you move a VM to another cluster within the same data center? I have a cluster running ovirt 3.5 nodes. I created another cluster with hosts running CentOS 7 (ovirt 3.6 version) and want to move VMs to this cluster. The compatibility mode for everything is 3.5. I tried shutting down a VM, but I cannot select the other cluster. that should work, just edit the VM and move to a different cluster. Does it give any reason why you cannot do that? Also live migration fails as well to the new cluster. yeah, that should not work. How exactly does it fail to migrate? I guess you're using the migration dialog and migrate to the new cluster (that's removed/hidden in 4.1) It's going to fail with a specific error/reason (might be missing network or mismatch in cluster settings). Such error would only be in vdsm.log somewhere... 
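The resolution described above boils down to aligning the SELinux mode of the destination host with the source. Roughly, on the destination host (note this reboots the machine and the relabel can take a while):

```shell
# Compare modes first, on both source and destination hosts:
getenforce

# Switch the destination from disabled to permissive persistently...
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# ...queue a full filesystem relabel for the next boot, then reboot:
touch /.autorelabel
reboot
```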
Thanks, michal Thanks, Chris ______________________________________________________________________ This email has been scanned by the Boundary Defense for Email Security System. For more information please visit http://www.apptix.com/email-security/antispam-virus ______________________________________________________________________ _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From jiri.slezka at slu.cz Tue Feb 20 16:37:40 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Tue, 20 Feb 2018 17:37:40 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: Message-ID: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> On 02/20/2018 03:48 PM, Arik Hadas wrote: > > > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Slezka > > wrote: > > > > Hi Arik, > > > > On 02/20/2018 01:22 PM, Arik Hadas wrote: > > > > > > > > > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Slezka > > > >> wrote: > > > > > > Hi, > > > > > > > > > Hi Jiří, > > > > > > > > I would like to try import some ova files into our oVirt instance [1] > > [2] but I facing problems. > > > > I have downloaded all ova images into one of hosts (ovirt01) into > > direcory /ova > > > > ll /ova/ > > total 6532872 > > 
?-rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf > >? ? ?-rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > >? ? ?-rw-r--r--. 1 vdsm kvm? 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > >? ? ?-rw-r--r--. 1 vdsm kvm? 891043328 Feb 16 16:23 HAAS-hptelnetd.ova > >? ? ?-rw-r--r--. 1 vdsm kvm? 908222464 Feb 16 16:23 HAAS-hpuchotcp.ova > >? ? ?-rw-r--r--. 1 vdsm kvm? 880643072 Feb 16 16:24 HAAS-hpuchoudp.ova > >? ? ?-rw-r--r--. 1 vdsm kvm? 890833920 Feb 16 16:24 HAAS-hpuchoweb.ova > > > >? ? ?Then I tried to import them - from host ovirt01 and directory /ova but > >? ? ?spinner spins infinitly and nothing is happen. > > > > > > And does it work when you provide a path to the actual ova file, i.e., > > /ova/HAAS-hpdio.ova, rather than to the directory? > > this time it ends with "Failed to load VM configuration from OVA file: > /ova/HAAS-hpdio.ova" error.? > > > Note that the logic that is applied on a specified folder is "try > fetching an 'ova folder' out of the destination folder" rather than > "list all the ova files inside the specified folder". It seems that you > expected the former output since there are no disks in that folder, right? yes, It would be more user friendly to list all ova files and then select which one to import (like listing all vms in vmware import) Maybe description of path field in manager should be "Path to ova file" instead of "Path" :-) > >? ? ?I cannot see anything relevant in vdsm log of host ovirt01. > > > >? ? ?In the engine.log of our standalone ovirt manager is just this > >? ? ?relevant line > > > >? ? ?2018-02-20 12:35:04,289+01 INFO > >? ? ?[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default > >? ? ?task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible > >? ? ?command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > >? ? ?[/usr/bin/ansible-playbook, > >? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > >? ? ?--inventory=/tmp/ansible-inventory8237874608161160784, > >? ? 
?--extra-vars=ovirt_query_ova_path=/ova, > >? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > >? ? ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > >? ? ? >.slu.cz.log] > > > >? ? ?also there are two ansible processes which are still running > (and makes > >? ? ?heavy load on system (load 9+ and growing, it looks like it > eats all the > >? ? ?memory and system starts swapping)) > > > >? ? ?ovirt? ? 32087? 3.3? 0.0 332252? 5980 ?? ? ? ? Sl? ?12:35? ?0:41 > >? ? ?/usr/bin/python2 /usr/bin/ansible-playbook > >? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 > >? ? ?--extra-vars=ovirt_query_ova_path=/ova > >? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > >? ? ?ovirt? ? 32099 57.5 78.9 15972880 11215312 ?? ?R? ? 12:35? 11:52 > >? ? ?/usr/bin/python2 /usr/bin/ansible-playbook > >? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 > >? ? ?--extra-vars=ovirt_query_ova_path=/ova > >? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > > >? ? ?playbook looks like > > > >? ? ?- hosts: all > >? ? ?? remote_user: root > >? ? ?? gather_facts: no > > > >? ? ?? roles: > >? ? ?? ? - ovirt-ova-query > > > >? ? ?and it looks like it only runs query_ova.py but on all hosts? > > > > > > No, the engine provides ansible the host to run on when it > executes the > > playbook. > > It would only be executed on the selected host. > > ? > > > > > >? ? ?How does this work? ...or should it work? > > > > > > It should, especially that part of querying the OVA and is supposed to > > be really quick. > > Can you please share the engine log and > > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > > >.slu.cz.log ? > > engine log is here: > > https://pastebin.com/nWWM3UUq > > > Thanks. 
> Alright, so now the configuration is fetched but its processing fails. > We fixed many issues in this area recently, but it appears that > something is wrong with the actual size of the disk within the ovf file > that resides inside this ova file. > Can you please share that ovf file that resides inside?/ova/HAAS-hpdio.ova? file HAAS-hpdio.ova HAAS-hpdio.ova: POSIX tar archive (GNU) [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk file HAAS-hpdio.ovf is here: https://pastebin.com/80qAU0wB > file > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) > > > This issue is also resolved in 4.2.2. > In the meantime, please create the ?/var/log/ovirt-engine/ova/ folder > manually and make sure its permissions match the ones of the other > folders in ?/var/log/ovirt-engine. ok, done. After another try there is this log file /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log https://pastebin.com/M5J44qur > Cheers, > > Jiri Slezka > > > ? > > > > > >? ? ?I am using latest 4.2.1.7-1.el7.centos version > > > >? ? ?Cheers, > >? ? ?Jiri Slezka > > > > > >? ? ?[1] https://haas.cesnet.cz/#!index.md > >? ? ? > - Cesnet HAAS > >? ? ?[2] https://haas.cesnet.cz/downloads/release-01/ > > >? ? ? > - Image repository > > > > > >? ? ?_______________________________________________ > >? ? ?Users mailing list > >? ? ?Users at ovirt.org > > > >? ? ?http://lists.ovirt.org/mailman/listinfo/users > > >? ? ? > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- A non-text attachment was scrubbed... 
From michal.skrivanek at redhat.com Tue Feb 20 17:17:12 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Tue, 20 Feb 2018 18:17:12 +0100 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <2EA3B6C6-C3E8-45D8-8ED4-4DF0AE97D279@redhat.com> Message-ID: > On 19 Feb 2018, at 23:36, Jason Keltz wrote: > > Hi Michal, > > On 2/15/2018 12:05 PM, Michal Skrivanek wrote: > >>> On 15 Feb 2018, at 16:37, Jason Keltz wrote: >>> >>> On 02/15/2018 08:48 AM, nicolas at devels.es wrote: >>>> Hi, >>>> >>>> We upgraded one of our infrastructures to 4.2.0 recently and since then some of our machines have the "Console" button greyed-out in the Admin UI, like they were disabled. >>>> >>>> I changed their compatibility to 4.2 but with no luck, as they're still disabled. >>>> >>>> Is there a way to know why is that, and how to solve it? >>>> >>>> I'm attaching a screenshot. >>> Hi Nicolas. >>> I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2. >>> See bugzilla here: https://bugzilla.redhat.com/show_bug.cgi?id=1528868 >>> (which admittedly was a mess of a bunch of different issues that occurred) >> yeah, that's not a good idea to mix more issues :) >> Seems https://bugzilla.redhat.com/show_bug.cgi?id=1528868#c26 is the last one relevant to the grayed-out console problem in this email thread. >> >> it's also possible to check the "VM Devices" subtab and list the graphical devices. If this is the same problem as Nicolas's then it would list cirrus, and it would be great if you can confirm the conditions are similar (i.e. originally a 3.6 VM) > I believe it was originally a 3.6 VM. Is there anywhere I can verify this info? If not, it would be helpful if oVirt kept track of the version that created the VM for cases just like this.
Hi, well, we keep the date and who did that, but we can't really keep all the logs forever. Well, you can if you archive them somewhere, but I guess that's impractical for such a long time :-D > > VM Device subtab: (no Cirrus) > so this is a screenshot from the VM where the button is grayed out when you start it? Hm.. it doesn't look wrong. >> And then - if possible - describe some history of what happened. When was the VM created, when was the cluster updated, when was the system upgraded and to what versions. > All I know is that everything was working fine, then I updated to 4.2, updated the cluster version, and then most of my consoles were not available. I can't remember if this happened before the cluster upgrade or not. I suspect it was most and not all VMs since some of them had been created later than 3.6, and this was an older one. I only have this one VM left in this state because I had deleted the other VMs and recreated them one at a time... > I will wait to see if you want me to try Vineet's solution of making it headless, Thanks. Can you get the engine.log and vdsm log when you attempt to start that VM - just the relevant part is enough. Thanks, michal >> Then, before bringing it back up, unchecked headless in the VM >> >> We then had to do a Run-Once, which failed >> Then did a normal Run. >> >> Console was available, and all hardware came back fine. >> > ... but I won't try that yet in case you need additional information from the VM first. > > Jason. > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jas at cse.yorku.ca Tue Feb 20 17:48:07 2018 From: jas at cse.yorku.ca (Jason Keltz) Date: Tue, 20 Feb 2018 12:48:07 -0500 Subject: [ovirt-users] Console button greyed out (4.2) In-Reply-To: References: <7a5730c52e6373cea1e4e2f491fd522b@devels.es> <2EA3B6C6-C3E8-45D8-8ED4-4DF0AE97D279@redhat.com> Message-ID: <8e048936-c0f9-4e65-81ec-2dd7f7814788@cse.yorku.ca> On 02/20/2018 12:17 PM, Michal Skrivanek wrote: >> On 19 Feb 2018, at 23:36, Jason Keltz > > wrote: >> >> Hi Michal, >> >> On 2/15/2018 12:05 PM, Michal Skrivanek wrote: >> >>>> On 15 Feb 2018, at 16:37, Jason Keltz wrote: >>>> >>>> On 02/15/2018 08:48 AM,nicolas at devels.es wrote: >>>>> Hi, >>>>> >>>>> We upgraded one of our infrastructures to 4.2.0 recently and since then some of our machines have the "Console" button greyed-out in the Admin UI, like they were disabled. >>>>> >>>>> I changed their compatibility to 4.2 but with no luck, as they're still disabled. >>>>> >>>>> Is there a way to know why is that, and how to solve it? >>>>> >>>>> I'm attaching a screenshot. >>>> Hi Nicolas. >>>> I had the same problem with most of my VMs after the upgrade from 4.1 to 4.2. >>>> See bugzilla here:https://bugzilla.redhat.com/show_bug.cgi?id=1528868 >>>> (which admittedly was a mesh of a bunch of different issues that occurred) >>> yeah, that?s not a good idea to mix more issues:) >>> Seemshttps://bugzilla.redhat.com/show_bug.cgi?id=1528868#c26 is the last one relevant to the grayed out console problem in this email thread. >>> >>> it?s also possible to check "VM Devices? subtab and list the graphical devices. If this is the same problem as from Nicolas then it would list cirrus and it would be great if you can confirm the conditionas are similar (i.e. originally a 3.6 VM) >> I believe it was originally a 3.6 VM. Is there anywhere I can verify >> this info? If not, it would be helpful if oVirt kept track of the >> version that created the VM for cases just like this. 
> > Hi, > well, we keep the date and who did that, but we can?t really keep all > the logs forever. Well, you can if you archive them somewhere, but I > guess that?s impractical for such a long time:-D > I wasn't really thinking in terms of logs. I was thinking a database field that tracks the ovirt version that created the VM. >> >> VM Device subtab: (no Cirrus) >> > > so this is a screenshot from VM where the button is grayed out when > you start it? > Hm..it doesn?t look wrong. > Yes. >>> And then - if possible - describe some history of what happened. When was the VM created, when was cluster updated, when the system was upgraded and to what versions. >> All I know is that everything was working fine, then I updated to >> 4.2, updated cluster version, and then most of my consoles were not >> available. I can't remember if this happened before the cluster >> upgrade or not. I suspect it was most and not all VMs since some of >> them had been created later than 3.6, and this was an older one. I >> only have this one VM left in this state because I had deleted the >> other VMs and recreated them one at a time... >> I will wait to see if you want me to try Vineet's solution of making >> it headless, > > Thanks. > Can you get engine.log and vdsm log when you attempt to start that VM > ? just the relevant part is enough. > Sure.. I restarted the VM (called "rs"). engine.log: http://www.eecs.yorku.ca/~jas/ovirt-debug/02202018/engine.log vdsm log: http://www.eecs.yorku.ca/~jas/ovirt-debug/02202018/vdsm.log Jason. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bailey at cs.kent.edu Tue Feb 20 19:16:54 2018 From: bailey at cs.kent.edu (Jeff Bailey) Date: Tue, 20 Feb 2018 14:16:54 -0500 Subject: [ovirt-users] oVirt 4.2 with cheph In-Reply-To: References: Message-ID: Yes, it works fine.? Just configure it as a posix fs.? 
I'm pretty sure you still need a different storage domain to hold the hosted engine (haven't tried 4.2) but other than that it works just like NFS. On 2/15/2018 6:18 AM, Christoph K?hler wrote: > Hello, > > does someone have experience with cephfs as a vm-storage domain? I think > about that but without any hints... > > Thanks for pointing me... > > From ahadas at redhat.com Tue Feb 20 22:09:40 2018 From: ahadas at redhat.com (Arik Hadas) Date: Wed, 21 Feb 2018 00:09:40 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> Message-ID: On Tue, Feb 20, 2018 at 6:37 PM, Ji?? Sl??ka wrote: > On 02/20/2018 03:48 PM, Arik Hadas wrote: > > > > > > On Tue, Feb 20, 2018 at 3:49 PM, Ji?? Sl??ka > > wrote: > > > > Hi Arik, > > > > On 02/20/2018 01:22 PM, Arik Hadas wrote: > > > > > > > > > On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka > > > >> wrote: > > > > > > Hi, > > > > > > > > > Hi Ji??, > > > > > > > > > > > > I would like to try import some ova files into our oVirt > instance [1] > > > [2] but I facing problems. > > > > > > I have downloaded all ova images into one of hosts (ovirt01) > into > > > direcory /ova > > > > > > ll /ova/ > > > total 6532872 > > > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 > HAAS-hpcowrie.ovf > > > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova > > > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 HAAS-hpjdwpd.ova > > > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 > HAAS-hptelnetd.ova > > > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 > HAAS-hpuchotcp.ova > > > -rw-r--r--. 1 vdsm kvm 880643072 Feb 16 16:24 > HAAS-hpuchoudp.ova > > > -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 > HAAS-hpuchoweb.ova > > > > > > Then I tried to import them - from host ovirt01 and directory > /ova but > > > spinner spins infinitly and nothing is happen. 
> > > > > > > > > And does it work when you provide a path to the actual ova file, > i.e., > > > /ova/HAAS-hpdio.ova, rather than to the directory? > > > > this time it ends with "Failed to load VM configuration from OVA > file: > > /ova/HAAS-hpdio.ova" error. > > > > > > Note that the logic that is applied on a specified folder is "try > > fetching an 'ova folder' out of the destination folder" rather than > > "list all the ova files inside the specified folder". It seems that you > > expected the former output since there are no disks in that folder, > right? > > yes, It would be more user friendly to list all ova files and then > select which one to import (like listing all vms in vmware import) > > Maybe description of path field in manager should be "Path to ova file" > instead of "Path" :-) > Sorry, I obviously meant 'latter' rather than 'former' before.. Yeah, I agree that would be better, at least until listing the OVA files in the folder is implemented (that was the original plan, btw) - could you please file a bug? > > > > I cannot see anything relevant in vdsm log of host ovirt01. 
> > > > > > In the engine.log of our standalone ovirt manager is just this > > > relevant line > > > > > > 2018-02-20 12:35:04,289+01 INFO > > > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] > (default > > > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing > Ansible > > > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > > > [/usr/bin/ansible-playbook, > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > > > --inventory=/tmp/ansible-inventory8237874608161160784, > > > --extra-vars=ovirt_query_ova_path=/ova, > > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] > [Logfile: > > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > > > >.slu.cz.log] > > > > > > also there are two ansible processes which are still running > > (and makes > > > heavy load on system (load 9+ and growing, it looks like it > > eats all the > > > memory and system starts swapping)) > > > > > > ovirt 32087 3.3 0.0 332252 5980 ? Sl 12:35 > 0:41 > > > /usr/bin/python2 /usr/bin/ansible-playbook > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > > --inventory=/tmp/ansible-inventory8237874608161160784 > > > --extra-vars=ovirt_query_ova_path=/ova > > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > > ovirt 32099 57.5 78.9 15972880 11215312 ? R 12:35 > 11:52 > > > /usr/bin/python2 /usr/bin/ansible-playbook > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > > --inventory=/tmp/ansible-inventory8237874608161160784 > > > --extra-vars=ovirt_query_ova_path=/ova > > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > > > > > > playbook looks like > > > > > > - hosts: all > > > remote_user: root > > > gather_facts: no > > > > > > roles: > > > - ovirt-ova-query > > > > > > and it looks like it only runs query_ova.py but on all hosts? > > > > > > > > > No, the engine provides ansible the host to run on when it > > executes the > > > playbook. 
> > > It would only be executed on the selected host. > > > > > > > > > > > > How does this work? ...or should it work? > > > > > > > > > It should, especially that part of querying the OVA and is > supposed to > > > be really quick. > > > Can you please share the engine log and > > > > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > > > >.slu.cz.log ? > > > > engine log is here: > > > > https://pastebin.com/nWWM3UUq > > > > > > Thanks. > > Alright, so now the configuration is fetched but its processing fails. > > We fixed many issues in this area recently, but it appears that > > something is wrong with the actual size of the disk within the ovf file > > that resides inside this ova file. > > Can you please share that ovf file that resides > inside /ova/HAAS-hpdio.ova? > > file HAAS-hpdio.ova > HAAS-hpdio.ova: POSIX tar archive (GNU) > > [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova > HAAS-hpdio.ovf > HAAS-hpdio-disk001.vmdk > > file HAAS-hpdio.ovf is here: > > https://pastebin.com/80qAU0wB Thanks again. So that seems to be a VM that was exported from Virtual Box, right? They don't do anything that violates the OVF specification but they do some non-common things that we don't anticipate: First, they don't specify the actual size of the disk and the current code in oVirt relies on that property. There is a workaround for this though: you can extract an OVA file, edit its OVF configuration - adding ovf:populatedSize="X" (and change ovf:capacity as I'll describe next) to the Disk element inside the DiskSection and pack the OVA again (tar cvf > > file > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) > > > > > > This issue is also resolved in 4.2.2. 
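[Editor's note] The repacking workaround described earlier in this message — unpack the OVA (a plain tar archive), add ovf:populatedSize to the Disk element of the OVF, and pack it again — can be sketched as follows. This is a minimal sketch, not oVirt code: the regex-based patching and the size value are illustrative assumptions, and the real value must be the actual allocated size of the disk image in bytes.

```python
import re

def add_populated_size(ovf_text, populated_size):
    """Insert ovf:populatedSize into each <Disk> element of an OVF document.

    Elements that already carry the attribute are left untouched, so the
    function is safe to run more than once.
    """
    attr = ' ovf:populatedSize="%d"' % populated_size

    def patch(match):
        tag = match.group(0)
        if "ovf:populatedSize" in tag:
            return tag
        if tag.endswith("/>"):          # self-closing <Disk .../>
            return tag[:-2] + attr + "/>"
        return tag[:-1] + attr + ">"

    # match only the opening <Disk ...> tags inside the DiskSection
    return re.sub(r"<Disk\b[^>]*>", patch, ovf_text)

# End-to-end flow from the thread (file names as posted there):
#   tar xf HAAS-hpdio.ova          # -> HAAS-hpdio.ovf, HAAS-hpdio-disk001.vmdk
#   ...rewrite HAAS-hpdio.ovf through add_populated_size()...
#   tar cf HAAS-hpdio-fixed.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk
```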
> > In the meantime, please create the /var/log/ovirt-engine/ova/ folder > > manually and make sure its permissions match the ones of the other > > folders in /var/log/ovirt-engine. > > ok, done. After another try there is this log file > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220173005-ovirt01.net.slu.cz.log > > https://pastebin.com/M5J44qur Is it the log of the execution of the ansible playbook that was provided with a path to the /ova folder? I'm interested in that in order to see how comes that its execution never completed. > > > > Cheers, > > > > Jiri Slezka > > > > > > > > > > > > > > I am using latest 4.2.1.7-1.el7.centos version > > > > > > Cheers, > > > Jiri Slezka > > > > > > > > > [1] https://haas.cesnet.cz/#!index.md < > https://haas.cesnet.cz/#!index.md> > > > > > - Cesnet HAAS > > > [2] https://haas.cesnet.cz/downloads/release-01/ > > > > > > > - Image repository > > > > > > > > > _______________________________________________ > > > Users mailing list > > > Users at ovirt.org > > > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > > > > > > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlawrence at squaretrade.com Tue Feb 20 23:04:22 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Tue, 20 Feb 2018 15:04:22 -0800 Subject: [ovirt-users] Reinitializing lockspace Message-ID: Hello, I have a sanlock problem. I don't fully understand the logs, but from what I can gather, messages like this means it ain't working. 
2018-02-16 14:51:46 22123 [15036]: s1 renewal error -107 delta_length 0 last_success 22046 2018-02-16 14:51:47 22124 [15036]: 53977885 aio collect RD 0x7fe5040008c0:0x7fe5040008d0:0x7fe518922000 result -107:0 match res 2018-02-16 14:51:47 22124 [15036]: s1 delta_renew read rv -107 offset 0 /rhev/data-center/mnt/glusterSD/sc5-gluster-10g-1.squaretrade.com:ovirt__images/53977885-0887-48d0-a02c-8d9e3faec93c/dom_md/ids I attempted `hosted-engine --reinitialize-lockspace --force`, which didn't appear to do anything, but who knows. I downed everything and tried `sanlock direct init -s ....`, which caused sanlock to dump core. At this point the only thing I can think of to do is down everything, whack and manually recreate the lease files and try again. I'm worried that that will lose something that the setup did or will otherwise destroy the installation. It looks like this has been done by others[1], but the references I can find are a bit old, so I'm unsure if that is still a valid approach. So, questions: - Will that work? - Is there something I should do instead of that? Thanks, -j [1] https://bugzilla.redhat.com/show_bug.cgi?id=1116469 From jlawrence at squaretrade.com Tue Feb 20 23:24:13 2018 From: jlawrence at squaretrade.com (Jamie Lawrence) Date: Tue, 20 Feb 2018 15:24:13 -0800 Subject: [ovirt-users] 4.2 aaa LDAP setup issue In-Reply-To: References: <776DB316-C6A5-4A64-88CA-88A92AE5F7B7@squaretrade.com> Message-ID: <2AB03D55-BDD7-4162-8B7C-6A74DADC7F47@squaretrade.com> I missed this when you sent it; apologies for the delay. > On Feb 13, 2018, at 12:11 AM, Ondra Machacek wrote: > > Hello, > > On 02/09/2018 08:17 PM, Jamie Lawrence wrote: >> Hello, >> I'm bringing up a new 4.2 cluster and would like to use LDAP auth. Our LDAP servers are fine and function normally for a number of other services, but I can't get this working. >> Our LDAP setup requires startTLS and a login. That last bit seems to be where the trouble is.
After ovirt-engine-extension-aaa-ldap-setup asks for the cert and I pass it the path to the same cert used via nslcd/PAM for logging in to the host, it replies: >> [ INFO ] Connecting to LDAP using 'ldap://x.squaretrade.com:389' >> [ INFO ] Executing startTLS >> [WARNING] Cannot connect using 'ldap://x.squaretrade.com:389': {'info': 'authentication required', 'desc': 'Server is unwilling to perform'} >> [ ERROR ] Cannot connect using any of available options >> "Unwilling to perform" makes me think -aaa-ldap-setup is trying something the backend doesn't support, but I'm having trouble guessing what that could be since the tool hasn't gathered sufficient information to connect yet - it asks for a DN/pass later in the script. And the log isn't much more forthcoming. >> I double-checked the cert with openssl; it is a valid, PEM-encoded cert. >> Before I head in to the code, has anyone seen this? > > Looks like you have disallowed anonymous bind on your LDAP. > We are trying to estabilish anonymous bind to test the connection. Ah, I think I forgot that anonymous bind was a thing. > I would recommend to try to do a manual configuration, the documentation > is here: > > https://github.com/oVirt/ovirt-engine-extension-aaa-ldap/blob/master/README#L17 > > Then in your /etc/ovirt-engine/aaa/profile1.properties add following > line: > > pool.default.auth.type = simple Awesome, thanks so much. I really appreciate the pointer. -j From ishaby at redhat.com Wed Feb 21 05:08:44 2018 From: ishaby at redhat.com (Idan Shaby) Date: Wed, 21 Feb 2018 07:08:44 +0200 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: <20180220133302.E3CD8E1D50@smtp01.mail.de> References: <20180220133302.E3CD8E1D50@smtp01.mail.de> Message-ID: On Tue, Feb 20, 2018 at 3:33 PM, wrote: > Hi, > > Here are lines I have found for my last faulty try : > > ENGINE > 2018-02-19 17:52:27,283+01 INFO [org.ovirt.engine.core.bll. 
> storage.disk.image.TransferImageStatusCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3) > [1320afb0] Running command: TransferImageStatusCommand internal: true. > Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: > SystemAction group CREATE_DISK with role type USER > 2018-02-19 17:52:27,290+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.TransferImageStatusCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-3) > [1320afb0] Lock freed to object 'EngineLock:{exclusiveLocks='', > sharedLocks='[]'}' > 2018-02-19 17:52:28,658+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.TransferImageStatusCommand] (default task-14) > [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Running command: > TransferImageStatusCommand internal: false. Entities affected : ID: > aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK > with role type USER > 2018-02-19 17:52:28,659+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.ImageTransferUpdater] (default task-14) > [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Updating image transfer > 1c55d561-45bf-4e57-b3b6-8fcbf3734a28 (image af5997c5-ae69-4677-9d86-30a978cf83a5) > phase to Paused by System (message: 'Sent 405200MB') > 2018-02-19 17:52:28,665+01 WARN [org.ovirt.engine.core.dal. > dbbroker.auditloghandling.AuditLogDirector] (default task-14) > [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] EVENT_ID: > UPLOAD_IMAGE_NETWORK_ERROR(1,038), Unable to upload image to disk > af5997c5-ae69-4677-9d86-30a978cf83a5 due to a network error. Make sure > ovirt-imageio-proxy service is installed and configured, and ovirt-engine's > certificate is registered as a valid CA in the browser. The certificate can > be fetched from https:///ovirt-engine/services/pki-resource? > resource=ca-certificate&format=X509-PEM-CA > 2018-02-19 17:52:32,624+01 INFO [org.ovirt.engine.core.bll. 
> storage.disk.image.TransferImageStatusCommand] (default task-28) > [0eba65e6-cca8-46ee-9038-fb29838ead47] Running command: > TransferImageStatusCommand internal: false. Entities affected : ID: > aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK > with role type USER > 2018-02-19 17:52:36,679+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.TransferImageStatusCommand] (default task-16) > [5d662615-a4e7-412b-8ecf-45be03c7e49f] Running command: > TransferImageStatusCommand internal: false. Entities affected : ID: > aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK > with role type USER > 2018-02-19 17:52:37,304+01 INFO [org.ovirt.engine.core.bll. > storage.disk.image.TransferDiskImageCommand] (EE-ManagedThreadFactory-engineScheduled-Thread-40) > [6f21de96-d3bd-4cf0-b922-394be7389d3b] Transfer was paused by system. > Upload disk 'pfm-serv-pdc_Disk1' (id '00000000-0000-0000-0000- > 000000000000') > > PROXY > (Thread-4087) ERROR 2018-02-19 17:52:28,644 images:143:root:(make_imaged_request) > Failed communicating with host: A Connection error occurred. > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", > line 134, in make_imaged_request > timeout=timeout, stream=stream) > File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, > in send > r = adapter.send(request, **kwargs) > File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 415, > in send > raise ConnectionError(err, request=request) > ConnectionError: ('Connection aborted.', error(32, 'Broken pipe')) > (Thread-4087) ERROR 2018-02-19 17:52:28,645 web:112:web:(log_error) ERROR > [10.100.0.184] PUT /images/f64acb43-d153-485d-b441-9f5d42773a03: [503] > Failed communicating with host: A Connection error occurred. 
(0.01s) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", > line 64, in __call__ > resp = self.dispatch(request) > File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", > line 91, in dispatch > return method(*match.groups()) > File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", > line 104, in wrapper > return func(self, *args) > File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", > line 60, in wrapper > ret = func(self, *args) > File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", > line 97, in put > self.request.method, imaged_url, headers, body, stream) > File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", > line 144, in make_imaged_request > raise exc.HTTPServiceUnavailable(s) > > I saw new updates so I applied them to the whole cluster and the engine > VM, and finally rebooted everything. > > My new try this morning was OK. > > Maybe one service just needed a restart ? > I am not sure. If it happens again, please attach the full logs so we can investigate and understand what happened there. > > Regards > > > > > > Le 20-Feb-2018 06:54:49 +0100, ishaby at redhat.com a ?crit: > > > Hi, > > Can you please attach the engine, vdsm, daemon and proxy logs? > > > Regards, > Idan > > On Mon, Feb 19, 2018 at 11:17 AM, wrote: > >> >> Hi, >> I am trying to build a new vm based on a vhd image coming from a windows >> machine. I converted the image to raw, and I am now trying to import it in >> the engine. >> After setting up the CA in my browser, the import process starts but >> stops after a while with "paused by system" status. I can resume it, but it >> pauses without transferring more. >> The engine logs don't explain much, I see a line for the start and the >> next one for the pause. >> My network seems to work correctly, and I have plenty of space in the >> storage domain. >> What can cause the process to pause ? 
>> Regards >> >> ------------------------------ >> FreeMail powered by mail.fr >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > ------------------------------ > FreeMail powered by mail.fr > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From knarra at redhat.com Wed Feb 21 06:54:33 2018 From: knarra at redhat.com (Kasturi Narra) Date: Wed, 21 Feb 2018 12:24:33 +0530 Subject: [ovirt-users] Ovirt Cluster Setup In-Reply-To: References: Message-ID: Hello sakhi, Can you please let us know what is the script it is failing at ? Thanks kasturi On Tue, Feb 20, 2018 at 1:05 PM, Sakhi Hadebe wrote: > I have 3 Dell R515 servers all installed with centOS 7, and trying to > setup an oVirt Cluster. > > Disks configurations: > 2 x 1TB - Raid1 - OS Deployment > 6 x 1TB - Raid 6 - Storage > > ?Memory is 128GB > > I am following this documentation https://www. > ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/ > and I am getting the issue below: > > PLAY [gluster_servers] ****************************** > *************************** > > TASK [Run a shell script] ****************************** > ************************ > fatal: [ovirt2.sanren.ac.za]: FAILED! => {"msg": "The conditional check > 'result.rc != 0' failed. The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > fatal: [ovirt3.sanren.ac.za]: FAILED! => {"msg": "The conditional check > 'result.rc != 0' failed. The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > fatal: [ovirt1.sanren.ac.za]: FAILED! => {"msg": "The conditional check > 'result.rc != 0' failed. 
The error was: error while evaluating conditional > (result.rc != 0): 'dict object' has no attribute 'rc'"} > to retry, use: --limit @/tmp/tmpxFXyGG/run-script.retry > > PLAY RECAP ************************************************************ > ********* > ovirt1.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > ovirt2.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > ovirt3.sanren.ac.za : ok=0 changed=0 unreachable=0 > failed=1 > > *Error: Ansible(>= 2.2) is not installed.* > *Some of the features might not work if not installed.* > > > ?I have installed ansible2.4 in all the servers, but the error persists. > > Is there anything I can do to get rid of this error? > -- > Regards, > Sakhi Hadebe > > Engineer: South African National Research Network (SANReN)Competency Area, Meraka, CSIR > > Tel: +27 12 841 2308 <+27128414213> > Fax: +27 12 841 4223 <+27128414223> > Cell: +27 71 331 9622 <+27823034657> > Email: sakhi at sanren.ac.za > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Langley at ventura.org Wed Feb 21 07:16:56 2018 From: Robert.Langley at ventura.org (Langley, Robert) Date: Wed, 21 Feb 2018 07:16:56 +0000 Subject: [ovirt-users] Use of virtual disks Message-ID: My first experience with this situation using oVirt. I do come from using VMWare and have been also been using oVirt for several years. We've also paid for RHEV, but migration is held up at the moment. I am trying to prepare for that. My problem is how it does not appear to be so simple to utilize the VM disks. In VMWare it is so simple. Snapshot or not, in vSphere I can take the virtual disk file and use it for another VM when needed. It doesn't make sense to me in oVirt. 
I have another entry here about my issue that lead to this need, where I am not able to delete snapshot files for those disks I was attempting to live migrate and there was an issue... Now, the empty snapshot files are preventing some VMs from starting. It seems I should be able to take the VM disk files, without the snapshots, and use them with another VM. But, that does not appear possible from what I can tell in oVirt. I desperately need to get one specific VM going. The other two, no worries. I was able to restore from backup, one of the effected VMs. The third is not important at all and can easily by re-created. Is anyone experienced with taking VM disks from one and using them (without snapshots) with another VM? I could really use some sort of workaround. Thanks if anyone can come up with a good answer that would help. -Robert L. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shamamamir2017 at gmail.com Wed Feb 21 07:48:03 2018 From: shamamamir2017 at gmail.com (Shamam Amir) Date: Wed, 21 Feb 2018 10:48:03 +0300 Subject: [ovirt-users] cloud-init issue /IP address lost Message-ID: Hi All, I am using a centos template which I had imported from ovirt-image-repository. Whenever I make a VM from this template and configure its initial run section and run it for the first time, everything goes well. As long as I reboot the server, the VM lost its network configuration (IP, gateway, dns) Then I have to put these parameters again manually. In addition, when I shut down the VM and change its name, the same problem happens again. This actually causes me pretty much long downtime in order to configure the parameters again. Your help is highly appreciated. Best Regards -------------- next part -------------- An HTML attachment was scrubbed... 
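[Editor's note] On the cloud-init question above: a common cause of this symptom is that cloud-init keeps running on every boot and, lacking the one-shot data the engine supplied on the initial run, resets the guest's network settings. A frequently suggested guest-side workaround — an assumption here, not something confirmed in this thread — is to tell cloud-init to stop managing the network after the first provisioning, via a drop-in file inside the guest:

```yaml
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
# (the file name is conventional; any *.cfg drop-in in this directory is read)
# Leave the interface configuration alone on subsequent boots.
network:
  config: disabled
```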
URL: From tjelinek at redhat.com Wed Feb 21 07:55:55 2018 From: tjelinek at redhat.com (Tomas Jelinek) Date: Wed, 21 Feb 2018 08:55:55 +0100 Subject: [ovirt-users] Spice Client Connection Issues Using aSpice In-Reply-To: References: <1519116986.1980.6.camel@inparadise.se> Message-ID: On Wed, Feb 21, 2018 at 2:05 AM, Jeremy Tourville < Jeremy_Tourville at hotmail.com> wrote: > Hello everyone, > > I can confirm that spice is working for me when I launch it using the .vv > file. I have virt viewer installed on my Windows pc and it works without > issue. I can also launch spice when I use movirt without any issues. I > examined the contents of the .vv file to see what the certificate looks > like. I can confirm that the certficate in the .vv file is the same as > the file I downloaded in step 1 of my directions. > > > I reviewed the PKI reference (https://www.ovirt.org/ > develop/release-management/features/infra/pki/) > > for a second time and I see the same certificate located in different > locations. > > > For example, all these locations contain the same certificate- > > - https://ovirtengine.lan/ovirt- > engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA > > - /etc/pki/vdsm/certs/cacert.pem > - /etc/pki/vdsm/libvirt-spice/ca-cert.pem > - /etc/pki/CA/cacert.pem > > This is the certificate I am using to configure my aSpice client. > > Can someone answer the question from my original post? The PKI reference > says for version 3.2 and 3.3. Is the documentation still correct for > version 4.2? > > > At this point I am trying to find out where the problems exists - ie. > > #1 Is my client not configured correctly? > > #2 Am I using the wrong cert? (I think I am using the correct cert based > on the research I listed above) > I'd guess yes based on above > #3 Does my client need to be able to send a pasword? 
(based on the > contents of the .vv file, I'd have to guess yes) > yes > Also my xml file for the VM in question contains this: > passwdValidTo='1970-01-01T00:00:01'> > Please note: I did not perform any hand configuration of the xml file, it > was all done by the system using the UI. > the password is generated automatically. Normally it works like this: - you ask for the .vv file - ovirt generates a temporary password you can use to connect to console - you can connect to the console using this temporary password > #4 Can I configure a file on the system to turn off ticketing and > passwords and see if that makes a difference, if so, what file? > I don't think there is an easy way to do this... Maybe writing some vdsm hook or some other complex hack. I've seen an old discussion about it here: http://lists.ovirt.org/pipermail/users/2014-August/026774.html but I would not recommend you to go down this path. > #5 Can someone explain this error? > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert > internal error:s3_pkt.c:1493:SSL alert number 80 > ((null):27595): Spice-Warning **:reds_stream.c:379:reds_stream_ssl_accept: > SSL_accept failed, error=1 > > What I know about it is this: > According to RFC 2246, the alert number 80 represents an "internal > error". Here is the description from the RFC > internal_error: An internal error unrelated to the peer or the correctness > of the protocol makes it impossible to continue (such as a memory > allocation failure). This message is always fatal. > > #6 Could this error be related to any of #1 through #4 above? > yes, I'd say yes. > > Thanks! 
> > > ------------------------------ > *From:* Karli Sj?berg > *Sent:* Tuesday, February 20, 2018 2:56 AM > *To:* Tomas Jelinek; Jeremy Tourville > > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] Spice Client Connection Issues Using aSpice > > On Tue, 2018-02-20 at 08:59 +0100, Tomas Jelinek wrote: > > > > > > On Mon, Feb 19, 2018 at 7:10 PM, Jeremy Tourville > otmail.com> wrote: > > > Hi Tomas, > > > To answer your question, yes I am really trying to use aSpice. > > > > > > I appreciate your suggestion. I'm not sure if it meets my > > > objective. Maybe our goals are different? It seems to me that > > > movirt is built around portable management of the ovirt > > > environment. I am attempting to provide a VDI type experience for > > > running a vm. My goal is to run a lab environment with 30 > > > chromebooks loaded with a spice clent. The spice client would of > > > course connect to the 30 vms running Kali and each session would be > > > independent of each other. > > > > > > > yes, it looks like a different use case > > > > > I did a little further testing with a different client. (spice > > > plugin for chrome). When I attempted to connect using that client > > > I got a slightly different error message. The message still seemed > > > to be of the same nature- i.e.: there is a problem with SSL > > > protocol and communication. > > > > > > Are you suggesting that movirt can help set up the proper > > > certficates and config the vms to use spice? Thanks! > > > > > > > moVirt has been developed for quite some time and works pretty well, > > this is why I recommended it. But anyway, you have a different use > > case. > > > > What I think the issue is, is that oVirt can have different CAs set > > for console communication and for API. And I think you are trying to > > configure aSPICE to use the one for API. > > > > What moVirt does to make sure it is using the correct CA to put into > > the aSPICE is that it downloads the .vv file of the VM (e.g. 
you can > > just connect to console using webadmin and save the .vv file > > somewhere), parse it and use the CA= part from it as a certificate. > > This one is guaranteed to be the correct one. > > > > For more details about what else it takes from the .vv file you can > > check here: > > the parsing: https://github.com/oVirt/moVirt/blob/master/moVirt/src/m > > ain/java/org/ovirt/mobile/movirt/rest/client/httpconverter/VvFileHttp > > MessageConverter.java > > configuration of aSPICE: https://github.com/oVirt/moVirt/blob/master/ > > moVirt/src/main/java/org/ovirt/mobile/movirt/util/ConsoleHelper.java > > > > enjoy :) > > Feels to me like OP should try to get it working _any_ "normal" way > before trying to get the special use case application working? > > Like trying to run before learning to crawl, if that makes sense? > > I would suggest just logging in to webadmin with a regular PC and > trying to get a SPICE console with remote-viewer to begin with. Then, > once that works, try to get a SPICE console working through moVirt with > aSPICE on an Android phone, or one of the Chromebooks you have to play > with before going into production. Once that?s settled and you know it > should work the way you normally access it, you can start playing with > your special use case application. > > Hope it helps! > > /K > > > > > > > > > From: Tomas Jelinek > > > Sent: Monday, February 19, 2018 4:19 AM > > > To: Jeremy Tourville > > > Cc: users at ovirt.org > > > Subject: Re: [ovirt-users] Spice Client Connection Issues Using > > > aSpice > > > > > > > > > > > > On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville > > @hotmail.com> wrote: > > > > Hello, > > > > I am having trouble connecting to my guest vm (Kali Linux) which > > > > is running spice. My engine is running version: 4.2.1.7- > > > > 1.el7.centos. > > > > I am using oVirt Node as my host running version: 4.2.1.1. > > > > > > > > I have taken the following steps to try and get everything > > > > running properly. 
> > > > Download the root CA certificate https://ovirtengine.lan/ovirt-en > > > > gine/services/pki-resource?resource=ca-certificate&format=X509- > > > > PEM-CA > > > > Edit the vm and define the graphical console entries. Video type > > > > is set to QXL, Graphics protocol is spice, USB support is > > > > enabled. > > > > Install the guest agent in Debian per the instructions here - htt > > > > ps://www.ovirt.org/documentation/how-to/guest-agent/install-the- > > > > guest-agent-in-debian/ It is my understanding that installing > > > > the guest agent will also install the virt IO device drivers. > > > > Install the spice-vdagent per the instructions here - https://www > > > > .ovirt.org/documentation/how-to/guest-agent/install-the-spice- > > > > guest-agent/ > > > > On the aSpice client I have imported the CA certificate from step > > > > 1 above. I defined the connection using the IP of my Node and > > > > TLS port 5901. > > > > > > are you really using the aSPICE client (e.g. the android SPICE > > > client?). If yes, maybe you want to try to open it using moVirt (ht > > > tps://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt > > > &hl=en) which delegates the console to aSPICE but configures > > > everything including the certificates on it. Should be much simpler > > > than configuring it by hand.. > > > > To troubleshoot my connection issues I confirmed the port being > > > > used to listen. > > > > virsh # domdisplay Kali > > > > spice://172.30.42.12?tls-port=5901 > > > > I see the following when attempting to connect.
> > > > tail -f /var/log/libvirt/qemu/Kali.log > > > > > > > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 > > > > alert internal error:s3_pkt.c:1493:SSL alert number 80 > > > > ((null):27595): Spice-Warning **: > > > > reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, > > > > error=1 > > > > > > > > I came across some documentation that states in the caveat > > > > section "Certificate of spice SSL should be separate > > > > certificate." > > > > https://www.ovirt.org/develop/release-management/features/infra/p > > > > ki/ > > > > > > > > Is this still the case for version 4? The document references > > > > version 3.2 and 3.3. If so, how do I generate a new certificate > > > > for use with spice? Please let me know if you require further > > > > info to troubleshoot, I am happy to provide it. Many thanks in > > > > advance. > > > > > > > > > > > > > > > > _______________________________________________ > > > > Users mailing list > > > > Users at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed...
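To make the .vv-file advice from earlier in this thread concrete: below is a hedged shell sketch of the same handling moVirt automates. The console.vv written here is a placeholder standing in for a real file saved from webadmin; the host, port, and PEM contents are invented for illustration, and the key names come from the virt-viewer .vv format.

```shell
# Write a placeholder console.vv like the one webadmin serves (values faked):
cat > console.vv <<'EOF'
[virt-viewer]
type=spice
host=192.0.2.10
tls-port=5901
ca=-----BEGIN CERTIFICATE-----\nMIIC...placeholder...\n-----END CERTIFICATE-----\n
EOF

# The ca= value carries the PEM with literal \n escapes; unescape it into a
# file a SPICE client can import as its CA, then show the TLS port to use:
sed -n 's/^ca=//p' console.vv | sed 's/\\n/\n/g' > console-ca.pem
sed -n 's/^tls-port=//p' console.vv
```

The point of parsing ca= out of the downloaded .vv, rather than fetching the engine's API CA by hand, is that this certificate is guaranteed to match the console endpoint.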
URL: From eshenitz at redhat.com Wed Feb 21 08:01:57 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Wed, 21 Feb 2018 10:01:57 +0200 Subject: [ovirt-users] Use of virtual disks In-Reply-To: References: Message-ID: Hi Robert, If I understand correctly, you are trying to share disks between two VMs. In oVirt you can set a disk as shareable by editing the disk properties; a shareable disk can be attached to multiple VMs. If you wish to detach a disk from one VM and attach it to another VM, that is simple too: select the VM that holds the relevant disk and detach the disk from it; you will then find the detached disk under the 'Disks' tab. To attach the floating disk to another VM, select the desired VM, go to its disks tab and press 'Attach'. You will see all the floating disks that exist in the data-center. I hope this helps; please ask if you have more questions. For more information, you can visit https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/ On Wed, Feb 21, 2018 at 9:16 AM, Langley, Robert wrote: > My first experience with this situation using oVirt. > I do come from using VMWare and have also been using oVirt for > several years. > We've also paid for RHEV, but migration is held up at the moment. I am > trying to prepare for that. > > My problem is how it does not appear to be so simple to utilize the VM > disks. In VMWare it is so simple. Snapshot or not, in vSphere I can take > the virtual disk file and use it for another VM when needed. It doesn't > make sense to me in oVirt. I have another entry here about my issue that > led to this need, where I am not able to delete snapshot files for those > disks I was attempting to live migrate and there was an issue... Now, the > empty snapshot files are preventing some VMs from starting. It seems I > should be able to take the VM disk files, without the snapshots, and use > them with another VM. But, that does not appear possible from what I can > tell in oVirt.
> > I desperately need to get one specific VM going. The other two, no > worries. I was able to restore one of the affected VMs from backup. The > third is not important at all and can easily be re-created. > Is anyone experienced with taking VM disks from one and using them > (without snapshots) with another VM? I could really use some sort of > workaround. > > Thanks if anyone can come up with a good answer that would help. > -Robert L. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Wed Feb 21 08:02:58 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 21 Feb 2018 09:02:58 +0100 Subject: [ovirt-users] cloud-init issue /IP address lost In-Reply-To: References: Message-ID: Hello Amir, I had the same issue. As a workaround, I added a raw command to the cloud-init scripts that disables autostart of the cloud-init service at boot, so it doesn't interfere anymore. Luca On 21 Feb 2018 8:48 AM, "Shamam Amir" wrote: > Hi All, > I am using a CentOS template which I had imported from > ovirt-image-repository. Whenever I make a VM from this template, > configure its initial run section and run it for the first time, everything > goes well. But as soon as I reboot the server, the VM loses its network > configuration (IP, gateway, DNS) and I have to put these parameters in again > manually. In addition, when I shut down the VM and change its name, the > same problem happens again. This actually causes me pretty long > downtime in order to configure the parameters again. > Your help is highly appreciated.
> > > Best Regards > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Wed Feb 21 08:03:35 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 21 Feb 2018 10:03:35 +0200 Subject: [ovirt-users] CPU queues on ovirt hosts. In-Reply-To: References: Message-ID: On Tue, Feb 20, 2018 at 3:09 PM, Endre Karlson wrote: > Hi guys, is there a way to have CPU queues go down when having a java app > on an ovirt host? > > we have an idm app where the cpu queue is constantly 2-3 when we are doing > things with the configuration but on esx on a similar host it is much faster > Your question is not clear to me and lacks a lot of details. I think what you are asking is 'why is application X running faster on ESX?' - am I reading it right? Please provide much needed background information. What version of oVirt and hosts are you using, the configuration of the VM, the type of workload (IO bound? CPU bound? etc.). Is that a Windows (based on the terminology 'CPU queues') or Linux VM? If it's Windows, have you installed all relevant virtio drivers? Is your workload using some kind of random data, perhaps (and then virtio-rng is quite useful to have)? etc. Y. > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
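As a side note on measuring the "CPU queue" being discussed: on a Linux guest or host, the fourth field of /proc/loadavg is the runnable/total count of kernel scheduling entities (per proc(5)), so a quick sketch to inspect it is:

```shell
# Print the instantaneous run queue: in /proc/loadavg the 4th field is
# "runnable/total" kernel scheduling entities (see proc(5)).
awk '{ split($4, q, "/"); print "runnable tasks:", q[1], "of", q[2] }' /proc/loadavg
```

Sampling this alongside the workload (or using `vmstat`'s `r` column) makes the 2-3 figure quoted above reproducible and comparable between the oVirt and ESX hosts.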
URL: From matonb at ltresources.co.uk Wed Feb 21 08:26:01 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Wed, 21 Feb 2018 08:26:01 +0000 Subject: [ovirt-users] cloud-init issue /IP address lost In-Reply-To: References: Message-ID: An alternative to disabling the service is: touch /etc/cloud/cloud-init.disabled On 21 February 2018 at 08:02, Luca 'remix_tj' Lorenzetto < lorenzetto.luca at gmail.com> wrote: > Hello Amir, > > I had the same issue. I've set as raw command in cloud init scripts the > disable of autostart of cloud-init service at boot, so this doesn't bother > anymore. > > Luca > > On 21 Feb 2018 8:48 AM, "Shamam Amir" > wrote: > >> Hi All, >> I am using a centos template which I had imported from >> ovirt-image-repository. Whenever I make a VM from this template and >> configure its initial run section and run it for the first time, everything >> goes well. As long as I reboot the server, the VM lost its network >> configuration (IP, gateway, dns) Then I have to put these parameters again >> manually. In addition, when I shut down the VM and change its name, the >> same problem happens again. This actually causes me pretty much long >> downtime in order to configure the parameters again. >> Your help is highly appreciated. >> >> >> Best Regards >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
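The two suggestions in this thread (disabling the cloud-init units, or the `cloud-init.disabled` marker file) can be sketched together. `ROOT` is an assumption added here so the commands can be exercised against a scratch directory; on a real guest it would be empty:

```shell
# ROOT points at the guest filesystem; set ROOT= (empty) on the real guest.
ROOT=${ROOT:-./demo-root}
mkdir -p "$ROOT/etc/cloud"
# cloud-init skips its stages at boot when this marker file exists:
touch "$ROOT/etc/cloud/cloud-init.disabled"
# ...or, equivalently, disable the units instead (real guest only; the usual
# CentOS 7 unit names, verify against your image):
# systemctl disable cloud-init cloud-init-local cloud-config cloud-final
echo "marker created: $ROOT/etc/cloud/cloud-init.disabled"
```

Either way, the network settings written by the initial run survive later reboots because cloud-init no longer re-runs.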
URL: From eshenitz at redhat.com Wed Feb 21 08:38:09 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Wed, 21 Feb 2018 10:38:09 +0200 Subject: [ovirt-users] Cannot delete auto-generated snapshot In-Reply-To: References: Message-ID: Hi Robert, The auto-generated snapshot is created while performing live storage migration of disks (moving disks from one storage domain to another while the VM is up); it should be removed automatically by the engine when the live migration ends. The log you sent me doesn't contain the disk migration that you described. Please send me some older logs so I will be able to investigate whether there was a problem. On Tue, Feb 20, 2018 at 6:38 PM, Langley, Robert wrote: > Attached now is the vdsm log from the hypervisor currently hosting the VMs. > > > ------------------------------ > *From:* Eyal Shenitzky > *Sent:* Tuesday, February 20, 2018 2:49 AM > *To:* Langley, Robert > *Cc:* users at ovirt.org > *Subject:* Re: [ovirt-users] Cannot delete auto-generated snapshot > > Hey Robert, > > Can you please attach the VDSM and Engine log? > > Also, please write the version of the engine you are working with. > > > > On Tue, Feb 20, 2018 at 12:17 PM, Langley, Robert < > Robert.Langley at ventura.org> wrote: > > I was moving some virtual disks from one storage server to another. Now, I > have a couple servers that have the auto-generated snapshot, without disks, > and I cannot delete them. The VM will not start and there is the complaint > that the disks are illegal. > > Any help would be appreciated. I'm going to bed for now, but will try to > wake up earlier. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > -- > Regards, > Eyal Shenitzky > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nsoffer at redhat.com Wed Feb 21 08:48:13 2018 From: nsoffer at redhat.com (Nir Soffer) Date: Wed, 21 Feb 2018 08:48:13 +0000 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: References: <20180220133302.E3CD8E1D50@smtp01.mail.de> Message-ID: The vdsm and ovirt-imageio-daemon logs on the host selected for the upload can help to understand this issue. Nir On Wed, 21 Feb 2018, 7:10, Idan Shaby wrote: > On Tue, Feb 20, 2018 at 3:33 PM, wrote: > >> Hi, >> >> Here are lines I have found for my last faulty try : >> >> ENGINE >> 2018-02-19 17:52:27,283+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [1320afb0] Running >> command: TransferImageStatusCommand internal: true. Entities affected : >> ID: aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group >> CREATE_DISK with role type USER >> 2018-02-19 17:52:27,290+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-3) [1320afb0] Lock freed to >> object 'EngineLock:{exclusiveLocks='', sharedLocks='[]'}' >> 2018-02-19 17:52:28,658+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] >> (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Running command: >> TransferImageStatusCommand internal: false.
Entities affected : ID: >> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK >> with role type USER >> 2018-02-19 17:52:28,659+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.ImageTransferUpdater] >> (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] Updating image >> transfer 1c55d561-45bf-4e57-b3b6-8fcbf3734a28 (image >> af5997c5-ae69-4677-9d86-30a978cf83a5) phase to Paused by System (message: >> 'Sent 405200MB') >> 2018-02-19 17:52:28,665+01 WARN >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> (default task-14) [32ba50ad-f7cd-4b7e-87b9-f7ad8a73a946] EVENT_ID: >> UPLOAD_IMAGE_NETWORK_ERROR(1,038), Unable to upload image to disk >> af5997c5-ae69-4677-9d86-30a978cf83a5 due to a network error. Make sure >> ovirt-imageio-proxy service is installed and configured, and ovirt-engine's >> certificate is registered as a valid CA in the browser. The certificate can >> be fetched from https:// >> /ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA >> 2018-02-19 17:52:32,624+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] >> (default task-28) [0eba65e6-cca8-46ee-9038-fb29838ead47] Running command: >> TransferImageStatusCommand internal: false. Entities affected : ID: >> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK >> with role type USER >> 2018-02-19 17:52:36,679+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferImageStatusCommand] >> (default task-16) [5d662615-a4e7-412b-8ecf-45be03c7e49f] Running command: >> TransferImageStatusCommand internal: false. 
Entities affected : ID: >> aaa00000-0000-0000-0000-123456789aaa Type: SystemAction group CREATE_DISK >> with role type USER >> 2018-02-19 17:52:37,304+01 INFO >> [org.ovirt.engine.core.bll.storage.disk.image.TransferDiskImageCommand] >> (EE-ManagedThreadFactory-engineScheduled-Thread-40) >> [6f21de96-d3bd-4cf0-b922-394be7389d3b] Transfer was paused by system. >> Upload disk 'pfm-serv-pdc_Disk1' (id '00000000-0000-0000-0000-000000000000') >> >> PROXY >> (Thread-4087) ERROR 2018-02-19 17:52:28,644 >> images:143:root:(make_imaged_request) Failed communicating with host: A >> Connection error occurred. >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", >> line 134, in make_imaged_request >> timeout=timeout, stream=stream) >> File "/usr/lib/python2.7/site-packages/requests/sessions.py", line 576, >> in send >> r = adapter.send(request, **kwargs) >> File "/usr/lib/python2.7/site-packages/requests/adapters.py", line 415, >> in send >> raise ConnectionError(err, request=request) >> ConnectionError: ('Connection aborted.', error(32, 'Broken pipe')) >> (Thread-4087) ERROR 2018-02-19 17:52:28,645 web:112:web:(log_error) ERROR >> [10.100.0.184] PUT /images/f64acb43-d153-485d-b441-9f5d42773a03: [503] >> Failed communicating with host: A Connection error occurred. 
(0.01s) >> Traceback (most recent call last): >> File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", >> line 64, in __call__ >> resp = self.dispatch(request) >> File "/usr/lib/python2.7/site-packages/ovirt_imageio_common/web.py", >> line 91, in dispatch >> return method(*match.groups()) >> File >> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", line >> 104, in wrapper >> return func(self, *args) >> File >> "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/http_helper.py", line >> 60, in wrapper >> ret = func(self, *args) >> File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", >> line 97, in put >> self.request.method, imaged_url, headers, body, stream) >> File "/usr/lib/python2.7/site-packages/ovirt_imageio_proxy/images.py", >> line 144, in make_imaged_request >> raise exc.HTTPServiceUnavailable(s) >> >> I saw new updates so I applied them to the whole cluster and the engine >> VM, and finally rebooted everything. >> >> My new try this morning was OK. >> >> Maybe one service just needed a restart ? >> > I am not sure. > If it happens again, please attach the full logs so we can investigate and > understand what happened there. > >> >> Regards >> >> >> >> >> >> Le 20-Feb-2018 06:54:49 +0100, ishaby at redhat.com a ?crit: >> >> >> Hi, >> >> Can you please attach the engine, vdsm, daemon and proxy logs? >> >> >> Regards, >> Idan >> >> On Mon, Feb 19, 2018 at 11:17 AM, wrote: >> >>> >>> Hi, >>> I am trying to build a new vm based on a vhd image coming from a windows >>> machine. I converted the image to raw, and I am now trying to import it in >>> the engine. >>> After setting up the CA in my browser, the import process starts but >>> stops after a while with "paused by system" status. I can resume it, but it >>> pauses without transferring more. >>> The engine logs don't explain much, I see a line for the start and the >>> next one for the pause. 
>>> My network seems to work correctly, and I have plenty of space in the >>> storage domain. >>> What can cause the process to pause? Regards >>> >>> ------------------------------ >>> FreeMail powered by mail.fr >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> ------------------------------ >> FreeMail powered by mail.fr >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Wed Feb 21 09:16:19 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 21 Feb 2018 10:16:19 +0100 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: <20180220133302.E3CD8E1D50@smtp01.mail.de> References: <20180220133302.E3CD8E1D50@smtp01.mail.de> Message-ID: On Tue, Feb 20, 2018 at 2:33 PM, wrote: > Hi, > > Here are lines I have found for my last faulty try : > > Just to be sure it is not a problem I hit before, inherited from an update from a previous version (btw: what are the version and history of this install?): if you execute this on the engine, what do you get? engine-config -g ImageProxyAddress Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From ladislav.humenik at 1und1.de Wed Feb 21 09:48:28 2018 From: ladislav.humenik at 1und1.de (Ladislav Humenik) Date: Wed, 21 Feb 2018 10:48:28 +0100 Subject: [ovirt-users] Unable to remove storage domains Message-ID: Hello, we cannot remove our old NFS data storage domains; these 4 are already deactivated and unattached:

engine=> select id,storage_name from storage_domains where storage_name like 'bs09%';
                  id                  | storage_name
--------------------------------------+---------------
 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm
 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm
 f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm
 a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm
(4 rows)

The only images which still reside in the DB are the OVF_STORE templates:

engine=> select image_guid,storage_name,disk_description from images_storage_domain_view where storage_name like 'bs09%';
              image_guid              | storage_name  | disk_description
--------------------------------------+---------------+------------------
 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm  | OVF_STORE
 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm  | OVF_STORE
 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE
 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm  | OVF_STORE
 bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm  | OVF_STORE
 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE
 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE
 dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE
(8 rows)

Current oVirt Engine version: 4.1.8.2-1.el7.centos. Exception logs from the engine are in the attachment. Do you have any magic SQL statement to figure out what is causing this exception and how we can remove those storage domains without disruption? Thank you in advance -- Ladislav Humenik -------------- next part -------------- An HTML attachment was scrubbed...
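Before attempting any forced removal, it may help to confirm that nothing besides the OVF_STORE disks still references the stuck domains. A read-only sketch against the engine database, using only the view and column names already shown in this message (take an engine database backup before any manual surgery):

```sql
-- Anything left on the four domains besides the OVF_STORE disks?
SELECT image_guid, storage_name, disk_description
  FROM images_storage_domain_view
 WHERE storage_name LIKE 'bs09%'
   AND disk_description <> 'OVF_STORE';
```

If this returns no rows, leftover images are not the cause: the attached stack trace fails inside the force_delete_storage_domain stored procedure on a missing "storage_domain_map_table" relation, i.e. in the engine's own SQL path rather than in these rows.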
URL: -------------- next part -------------- 2018-02-21 09:50:02,484+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', sharedLocks=''}' 2018-02-21 09:50:02,509+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] Running command: RemoveStorageDomainCommand internal: false. Entities affected : ID: f5efd264-045b-48d5-b35c-661a30461de5 Type: StorageAction group DELETE_STORAGE_DOMAIN with role type ADMIN 2018-02-21 09:50:02,527+01 INFO [org.ovirt.engine.core.utils.transaction.TransactionSupport] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] transaction rolled back 2018-02-21 09:50:02,529+01 ERROR [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] Command 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist Where: SQL statement "TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE" PL/pgSQL function remove_entities_from_storage_domain(uuid) line 21 at SQL statement SQL statement "SELECT Remove_Entities_From_storage_domain(v_storage_domain_id)" PL/pgSQL function force_delete_storage_domain(uuid) line 3 at PERFORM 2018-02-21 09:50:02,530+01 ERROR [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] Exception: org.springframework.jdbc.BadSqlGrammarException: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does 
not exist Where: SQL statement "TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE" PL/pgSQL function remove_entities_from_storage_domain(uuid) line 21 at SQL statement SQL statement "SELECT Remove_Entities_From_storage_domain(v_storage_domain_id)" PL/pgSQL function force_delete_storage_domain(uuid) line 3 at PERFORM at org.springframework.jdbc.support.SQLErrorCodeSQLExceptionTranslator.doTranslate(SQLErrorCodeSQLExceptionTranslator.java:231) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:73) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1094) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.call(JdbcTemplate.java:1130) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.AbstractJdbcCall.executeCallInternal(AbstractJdbcCall.java:405) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.AbstractJdbcCall.doExecute(AbstractJdbcCall.java:365) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.simple.SimpleJdbcCall.execute(SimpleJdbcCall.java:198) [spring-jdbc.jar:4.2.4.RELEASE] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:135) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeImpl(SimpleJdbcCallsHandler.java:130) [dal.jar:] at org.ovirt.engine.core.dal.dbbroker.SimpleJdbcCallsHandler.executeModification(SimpleJdbcCallsHandler.java:76) [dal.jar:] at org.ovirt.engine.core.dao.StorageDomainDaoImpl.remove(StorageDomainDaoImpl.java:132) [dal.jar:] at org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand.lambda$executeCommand$0(RemoveStorageDomainCommand.java:76) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInNewTransaction(TransactionSupport.java:202) [utils.jar:] at 
org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand.executeCommand(RemoveStorageDomainCommand.java:74) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1251) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1391) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2055) [bll.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:164) [utils.jar:] at org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:103) [utils.jar:] at org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1451) [bll.jar:] at org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:397) [bll.jar:] at org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:516) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:498) [bll.jar:] at org.ovirt.engine.core.bll.Backend.runAction(Backend.java:451) [bll.jar:] at sun.reflect.GeneratedMethodAccessor529.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437) at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:70) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at 
org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:80) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.weld.ejb.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:93) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437) at org.ovirt.engine.core.bll.interceptors.CorrelationIdTrackerInterceptor.aroundInvoke(CorrelationIdTrackerInterceptor.java:13) [bll.jar:] at sun.reflect.GeneratedMethodAccessor251.invoke(Unknown Source) [:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at org.jboss.as.ee.component.ManagedReferenceLifecycleMethodInterceptor.processInvocation(ManagedReferenceLifecycleMethodInterceptor.java:89) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:437) at org.jboss.weld.ejb.AbstractEJBRequestScopeActivationInterceptor.aroundInvoke(AbstractEJBRequestScopeActivationInterceptor.java:73) [weld-core-impl-2.3.5.Final.jar:2.3.5.Final] at org.jboss.as.weld.ejb.EjbRequestScopeActivationInterceptor.processInvocation(EjbRequestScopeActivationInterceptor.java:83) [wildfly-weld-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at 
org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45) [wildfly-ee-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:21) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) at org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.singleton.SingletonComponentInstanceAssociationInterceptor.processInvocation(SingletonComponentInstanceAssociationInterceptor.java:53) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.tx.CMTTxInterceptor.invokeInNoTx(CMTTxInterceptor.java:263) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.supports(CMTTxInterceptor.java:374) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.as.ejb3.tx.CMTTxInterceptor.processInvocation(CMTTxInterceptor.java:243) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.interceptors.CurrentInvocationContextInterceptor.processInvocation(CurrentInvocationContextInterceptor.java:41) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.invocationmetrics.WaitTimeInterceptor.processInvocation(WaitTimeInterceptor.java:47) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at 
org.jboss.as.ejb3.security.SecurityContextInterceptor.processInvocation(SecurityContextInterceptor.java:100) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.deployment.processors.StartupAwaitInterceptor.processInvocation(StartupAwaitInterceptor.java:22) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.interceptors.ShutDownInterceptorFactory$1.processInvocation(ShutDownInterceptorFactory.java:64) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ejb3.component.interceptors.LoggingInterceptor.processInvocation(LoggingInterceptor.java:67) [wildfly-ejb3-10.1.0.Final.jar:10.1.0.Final] at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.as.ee.component.NamespaceContextInterceptor.processInvocation(NamespaceContextInterceptor.java:50) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.ContextClassLoaderInterceptor.processInvocation(ContextClassLoaderInterceptor.java:64) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356) at org.wildfly.security.manager.WildFlySecurityManager.doChecked(WildFlySecurityManager.java:636) at org.jboss.invocation.AccessCheckingInterceptor.processInvocation(AccessCheckingInterceptor.java:61) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.InterceptorContext.run(InterceptorContext.java:356) at org.jboss.invocation.PrivilegedWithCombinerInterceptor.processInvocation(PrivilegedWithCombinerInterceptor.java:80) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at 
org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) at org.jboss.as.ee.component.ViewService$View.invoke(ViewService.java:198) at org.jboss.as.ee.component.ViewDescription$1.processInvocation(ViewDescription.java:185) at org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:340) at org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:61) at org.jboss.as.ee.component.ProxyInvocationHandler.invoke(ProxyInvocationHandler.java:73) at org.ovirt.engine.core.common.interfaces.BackendLocal$$$view3.runAction(Unknown Source) [common.jar:] at org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.runAction(GenericApiGWTServiceImpl.java:171) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [rt.jar:1.8.0_161] at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [rt.jar:1.8.0_161] at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [rt.jar:1.8.0_161] at java.lang.reflect.Method.invoke(Method.java:498) [rt.jar:1.8.0_161] at com.google.gwt.user.server.rpc.RPC.invokeAndEncodeResponse(RPC.java:561) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processCall(RemoteServiceServlet.java:265) at com.google.gwt.user.server.rpc.RemoteServiceServlet.processPost(RemoteServiceServlet.java:305) at com.google.gwt.user.server.rpc.AbstractRemoteServiceServlet.doPost(AbstractRemoteServiceServlet.java:62) at javax.servlet.http.HttpServlet.service(HttpServlet.java:707) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final] at org.ovirt.engine.ui.frontend.server.gwt.GenericApiGWTServiceImpl.service(GenericApiGWTServiceImpl.java:77) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) [jboss-servlet-api_3.1_spec-1.0.0.Final.jar:1.0.0.Final] at io.undertow.servlet.handlers.ServletHandler.handleRequest(ServletHandler.java:85) at 
io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:129) at org.ovirt.engine.core.utils.servlet.HeaderFilter.doFilter(HeaderFilter.java:94) [utils.jar:] at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at org.ovirt.engine.ui.frontend.server.gwt.GwtCachingFilter.doFilter(GwtCachingFilter.java:132) at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at org.ovirt.engine.core.branding.BrandingFilter.doFilter(BrandingFilter.java:73) [branding.jar:] at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at org.ovirt.engine.core.utils.servlet.LocaleFilter.doFilter(LocaleFilter.java:66) [utils.jar:] at io.undertow.servlet.core.ManagedFilter.doFilter(ManagedFilter.java:61) at io.undertow.servlet.handlers.FilterHandler$FilterChainImpl.doFilter(FilterHandler.java:131) at io.undertow.servlet.handlers.FilterHandler.handleRequest(FilterHandler.java:84) at io.undertow.servlet.handlers.security.ServletSecurityRoleHandler.handleRequest(ServletSecurityRoleHandler.java:62) at io.undertow.servlet.handlers.ServletDispatchingHandler.handleRequest(ServletDispatchingHandler.java:36) at org.wildfly.extension.undertow.security.SecurityContextAssociationHandler.handleRequest(SecurityContextAssociationHandler.java:78) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.security.SSLInformationAssociationHandler.handleRequest(SSLInformationAssociationHandler.java:131) at io.undertow.servlet.handlers.security.ServletAuthenticationCallHandler.handleRequest(ServletAuthenticationCallHandler.java:57) at 
io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.security.handlers.AuthenticationConstraintHandler.handleRequest(AuthenticationConstraintHandler.java:53) at io.undertow.security.handlers.AbstractConfidentialityHandler.handleRequest(AbstractConfidentialityHandler.java:46) at io.undertow.servlet.handlers.security.ServletConfidentialityConstraintHandler.handleRequest(ServletConfidentialityConstraintHandler.java:64) at io.undertow.servlet.handlers.security.ServletSecurityConstraintHandler.handleRequest(ServletSecurityConstraintHandler.java:59) at io.undertow.security.handlers.AuthenticationMechanismsHandler.handleRequest(AuthenticationMechanismsHandler.java:60) at io.undertow.servlet.handlers.security.CachedAuthenticatedSessionHandler.handleRequest(CachedAuthenticatedSessionHandler.java:77) at io.undertow.security.handlers.NotificationReceiverHandler.handleRequest(NotificationReceiverHandler.java:50) at io.undertow.security.handlers.AbstractSecurityContextAssociationHandler.handleRequest(AbstractSecurityContextAssociationHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at org.wildfly.extension.undertow.security.jacc.JACCContextIdHandler.handleRequest(JACCContextIdHandler.java:61) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.server.handlers.PredicateHandler.handleRequest(PredicateHandler.java:43) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:292) at io.undertow.servlet.handlers.ServletInitialHandler.access$100(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:138) at io.undertow.servlet.handlers.ServletInitialHandler$2.call(ServletInitialHandler.java:135) at io.undertow.servlet.core.ServletRequestContextThreadSetupAction$1.call(ServletRequestContextThreadSetupAction.java:48) at 
io.undertow.servlet.core.ContextClassLoaderSetupAction$1.call(ContextClassLoaderSetupAction.java:43) at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) at io.undertow.servlet.api.LegacyThreadSetupActionWrapper$1.call(LegacyThreadSetupActionWrapper.java:44) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:272) at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:104) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:805) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] Caused by: org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist Where: SQL statement "TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE" PL/pgSQL function remove_entities_from_storage_domain(uuid) line 21 at SQL statement SQL statement "SELECT Remove_Entities_From_storage_domain(v_storage_domain_id)" PL/pgSQL function force_delete_storage_domain(uuid) line 3 at PERFORM at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2157) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:1886) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:255) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:555) at 
org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags(AbstractJdbc2Statement.java:417) at org.postgresql.jdbc2.AbstractJdbc2Statement.execute(AbstractJdbc2Statement.java:410) at org.jboss.jca.adapters.jdbc.CachedPreparedStatement.execute(CachedPreparedStatement.java:303) at org.jboss.jca.adapters.jdbc.WrappedPreparedStatement.execute(WrappedPreparedStatement.java:442) at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1133) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate$6.doInCallableStatement(JdbcTemplate.java:1130) [spring-jdbc.jar:4.2.4.RELEASE] at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:1078) [spring-jdbc.jar:4.2.4.RELEASE] ... 153 more 2018-02-21 09:50:02,552+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] EVENT_ID: USER_REMOVE_STORAGE_DOMAIN_FAILED(961), Correlation ID: 8badc63f-80cf-4211-b98f-a37604642251, Job ID: 38ec4e85-6455-426c-a1b2-67d36bd996e9, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to remove Storage Domain bs09aF2C9kvm. (User: admin at internal) 2018-02-21 09:50:02,561+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-154) [8badc63f-80cf-4211-b98f-a37604642251] Lock freed to object 'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', sharedLocks=''}' From artem.tambovskiy at gmail.com Wed Feb 21 09:52:14 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Wed, 21 Feb 2018 12:52:14 +0300 Subject: [ovirt-users] Fwd: why host is not capable to run HE? In-Reply-To: References: Message-ID: I took a HE VM down and stopped ovirt-ha-agents on both hosts. Tried hosted-engine --reinitialize-lockspace the command just silently executes and I'm not sure if it doing something at all. I also tried to clean the metadata. 
On one host it went correct, on second host it always failing with following messages: INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:VDSM domain monitor status: PENDING INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:VDSM domain monitor status: PENDING INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:VDSM domain monitor status: PENDING INFO:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:VDSM domain monitor status: PENDING ERROR:ovirt_hosted_engine_ha.agent.hosted_engine.HostedEngine:Failed to start monitoring domain (sd_uuid=4a7f8717-9bb0-4d80-8016-498fa4b88162, host_id=2): timeout during domain acquisition ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 191, in _run_agent return action(he) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py", line 67, in action_clean return he.clean(options.force_cleanup) File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 345, in clean self._initialize_domain_monitor() File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/hosted_engine.py", line 829, in _initialize_domain_monitor raise Exception(msg) Exception: Failed to start monitoring domain (sd_uuid=4a7f8717-9bb0-4d80-8016-498fa4b88162, host_id=2): timeout during domain acquisition ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Trying to restart agent WARNING:ovirt_hosted_engine_ha.agent.agent.Agent:Restarting agent, attempt '0' ERROR:ovirt_hosted_engine_ha.agent.agent.Agent:Too many errors occurred, giving up. Please review the log and consider filing a bug. 
INFO:ovirt_hosted_engine_ha.agent.agent.Agent:Agent shutting down I'm not an expert when it comes to reading sanlock, but the output looks a bit strange to me: from first host (host_id=2) [root at ovirt1 ~]# sanlock client status daemon b1d7fea2-e8a9-4645-b449-97702fc3808e.ovirt1.tel p -1 helper p -1 listener p -1 status p 3763 p 62861 quaggaVM p 63111 powerDNS p 107818 pjsip_freepbx_14 p 109092 revizorro_dev p 109589 routerVM s hosted-engine:2:/var/run/vdsm/storage/4a7f8717-9bb0-4d80- 8016-498fa4b88162/093faa75-5e33-4559-84fa-1f1f8d48153b/ 911c7637-b49d-463e-b186-23b404e50769:0 s a40cc3a9-54d6-40fd-acee-525ef29c8ce3:2:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_data/a40cc3a9-54d6-40fd-acee-525ef29c8ce3/dom_md/ids:0 s 4a7f8717-9bb0-4d80-8016-498fa4b88162:1:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_engine/4a7f8717-9bb0-4d80-8016-498fa4b88162/dom_md/ids:0 r a40cc3a9-54d6-40fd-acee-525ef29c8ce3:SDM:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_data/a40cc3a9-54d6-40fd-acee-525ef29c8ce3/dom_md/leases:1048576:49 p 3763 from second host (host_id=1) [root at ovirt2 ~]# sanlock client status daemon 9263e081-e5ea-416b-866a-0a73fe32fe16.ovirt2.tel p -1 helper p -1 listener p 150440 CentOS-Desk p 151061 centos-dev-box p 151288 revizorro_nfq p 151954 gitlabVM p -1 status s hosted-engine:1:/var/run/vdsm/storage/4a7f8717-9bb0-4d80- 8016-498fa4b88162/093faa75-5e33-4559-84fa-1f1f8d48153b/ 911c7637-b49d-463e-b186-23b404e50769:0 s a40cc3a9-54d6-40fd-acee-525ef29c8ce3:1:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_data/a40cc3a9-54d6-40fd-acee-525ef29c8ce3/dom_md/ids:0 s 4a7f8717-9bb0-4d80-8016-498fa4b88162:1:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_engine/4a7f8717-9bb0-4d80-8016-498fa4b88162/dom_md/ids:0 ADD Not sure if there is a problem with lockspace 4a7f8717-9bb0-4d80-8016-498fa4b88162, but both hosts are showing 1 as the host_id here. Is this correct? Shouldn't they have different IDs here?
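Each host reads its sanlock host id from /etc/ovirt-hosted-engine/hosted-engine.conf, and those ids must be unique per lockspace, so one quick sanity check is to collect that file from every host and compare the host_id values. A minimal sketch of that comparison; the helper names are mine, not part of any oVirt tooling, and the only assumption is the `host_id=N` key format shown in this thread:

```python
import re

def parse_host_id(conf_text):
    """Extract the integer host_id from hosted-engine.conf content, or None if absent."""
    m = re.search(r"^host_id=(\d+)\s*$", conf_text, re.MULTILINE)
    return int(m.group(1)) if m else None

def find_duplicate_ids(conf_by_host):
    """Map each claimed host_id to the hosts claiming it; any entry with more
    than one host indicates a sanlock id conflict."""
    claimed = {}
    for host, text in conf_by_host.items():
        claimed.setdefault(parse_host_id(text), []).append(host)
    return {hid: hosts for hid, hosts in claimed.items() if len(hosts) > 1}
```

Feeding it the contents of both hosts' config files would immediately show whether two hosts claim the same id, which is the situation the sanlock output above suggests.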
Once the ha-agents have been started, hosted-engine --vm-status shows 'unknown-stale-data' for the second host. And HE just doesn't start on the second host at all. Host redeployment hasn't helped either. Any advice on this? Regards, Artem On Mon, Feb 19, 2018 at 9:32 PM, Artem Tambovskiy < artem.tambovskiy at gmail.com> wrote: > Thanks Martin. > > As you suggested I updated hosted-engine.conf with correct host_id values > and restarted ovirt-ha-agent services on both hosts and now I run into the > problem with status "unknown-stale-data" :( > And second host still doesn't looks as capable to run HE. > > Should I stop HE VM, bring down ovirt-ha-agents and reinitialize-lockspace > and start ovirt-ha-agents again? > > Regards, > Artem > > > > On Mon, Feb 19, 2018 at 6:45 PM, Martin Sivak wrote: > >> Hi Artem, >> >> just a restart of ovirt-ha-agent services should be enough. >> >> Best regards >> >> Martin Sivak >> >> On Mon, Feb 19, 2018 at 4:40 PM, Artem Tambovskiy >> wrote: >> > Ok, understood. >> > Once I set correct host_id on both hosts how to take changes in force? >> With >> > minimal downtime? Or i need reboot both hosts anyway? >> > >> > Regards, >> > Artem >> > >> > On 19 Feb 2018 at 18:18, "Simone Tiraboschi" >> > wrote: >> > >> >> >> >> >> >> On Mon, Feb 19, 2018 at 4:12 PM, Artem Tambovskiy >> >> wrote: >> >>> >> >>> >> >>> Thanks a lot, Simone! >> >>> >> >>> This is clearly shows a problem: >> >>> >> >>> [root at ov-eng ovirt-engine]# sudo -u postgres psql -d engine -c >> 'select >> >>> vds_name, vds_spm_id from vds' >> >>> vds_name | vds_spm_id >> >>> -----------------+------------ >> >>> ovirt1.local | 2 >> >>> ovirt2.local | 1 >> >>> (2 rows) >> >>> >> >>> While hosted-engine.conf on ovirt1.local have host_id=1, and >> ovirt2.local >> >>> host_id=2. So totally opposite values. >> >>> So how to get this fixed in the simple way? Update the engine DB?
>> >> >> >> >> >> I'd suggest to manually fix /etc/ovirt-hosted-engine/hosted-engine.conf >> on >> >> both the hosts >> >> >> >>> >> >>> >> >>> Regards, >> >>> Artem >> >>> >> >>> On Mon, Feb 19, 2018 at 5:37 PM, Simone Tiraboschi < >> stirabos at redhat.com> >> >>> wrote: >> >>>> >> >>>> >> >>>> >> >>>> On Mon, Feb 19, 2018 at 12:13 PM, Artem Tambovskiy >> >>>> wrote: >> >>>>> >> >>>>> Hello, >> >>>>> >> >>>>> Last weekend my cluster suffered form a massive power outage due to >> >>>>> human mistake. >> >>>>> I'm using SHE setup with Gluster, I managed to bring the cluster up >> >>>>> quickly, but once again I have a problem with duplicated host_id >> >>>>> (https://bugzilla.redhat.com/show_bug.cgi?id=1543988) on second >> host and due >> >>>>> to this second host is not capable to run HE. >> >>>>> >> >>>>> I manually updated file hosted_engine.conf with correct host_id and >> >>>>> restarted agent & broker - no effect. Than I rebooted the host >> itself - >> >>>>> still no changes. How to fix this issue? >> >>>> >> >>>> >> >>>> I'd suggest to run this command on the engine VM: >> >>>> sudo -u postgres scl enable rh-postgresql95 -- psql -d engine -c >> >>>> 'select vds_name, vds_spm_id from vds' >> >>>> (just sudo -u postgres psql -d engine -c 'select vds_name, >> vds_spm_id >> >>>> from vds' if still on 4.1) and check >> >>>> /etc/ovirt-hosted-engine/hosted-engine.conf on all the involved >> host. >> >>>> Maybe you can also have a leftover configuration file on undeployed >> >>>> host. >> >>>> >> >>>> When you find a conflict you should manually bring down sanlock >> >>>> In doubt a reboot of both the hosts will solve for sure. 
>> >>>> >> >>>> >> >>>>> >> >>>>> >> >>>>> Regards, >> >>>>> Artem >> >>>>> >> >>>>> _______________________________________________ >> >>>>> Users mailing list >> >>>>> Users at ovirt.org >> >>>>> http://lists.ovirt.org/mailman/listinfo/users >> >>>>> >> >>>> >> >>> >> >>> >> >>> >> >>> _______________________________________________ >> >>> Users mailing list >> >>> Users at ovirt.org >> >>> http://lists.ovirt.org/mailman/listinfo/users >> >>> >> >> >> > >> > _______________________________________________ >> > Users mailing list >> > Users at ovirt.org >> > http://lists.ovirt.org/mailman/listinfo/users >> > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gianluca.cecchi at gmail.com Wed Feb 21 10:45:36 2018 From: gianluca.cecchi at gmail.com (Gianluca Cecchi) Date: Wed, 21 Feb 2018 11:45:36 +0100 Subject: [ovirt-users] info about vdsClient and vdsm-client Message-ID: Hello, on my cluster that has an history from 4.0.6 to 4.1.9 I find vdsClient installed. I don't remember if I manually installed it or if it was part of a default/standard install. Now that it is deprecated: https://www.ovirt.org/develop/developer-guide/vdsm/vdsclient/ I notice I have not the vdsm-client package installed. Does it make sense to automatically pull it in for upgrades? Thanks, Gianluca -------------- next part -------------- An HTML attachment was scrubbed... URL: From lorenzetto.luca at gmail.com Wed Feb 21 10:53:46 2018 From: lorenzetto.luca at gmail.com (Luca 'remix_tj' Lorenzetto) Date: Wed, 21 Feb 2018 11:53:46 +0100 Subject: [ovirt-users] info about vdsClient and vdsm-client In-Reply-To: References: Message-ID: On Wed, Feb 21, 2018 at 11:45 AM, Gianluca Cecchi wrote: > Hello, > on my cluster that has an history from 4.0.6 to 4.1.9 I find vdsClient > installed. > I don't remember if I manually installed it or if it was part of a > default/standard install. 
> > Now that it is deprecated: > https://www.ovirt.org/develop/developer-guide/vdsm/vdsclient/ > > I notice I have not the vdsm-client package installed. > Does it make sense to automatically pull it in for upgrades? > Thanks, > Gianluca Hello Gianluca, [root at kvm01 ~]# vds vdsClient vdsm-client vdsm-tool That's what I find in a default setup. -- "E' assurdo impiegare gli uomini di intelligenza eccellente per fare calcoli che potrebbero essere affidati a chiunque se si usassero delle macchine" Gottfried Wilhelm von Leibnitz, Filosofo e Matematico (1646-1716) "Internet è la più grande biblioteca del mondo. Ma il problema è che i libri sono tutti sparsi sul pavimento" John Allen Paulos, Matematico (1945-vivente) Luca 'remix_tj' Lorenzetto, http://www.remixtj.net , From msivak at redhat.com Wed Feb 21 11:34:51 2018 From: msivak at redhat.com (Martin Sivak) Date: Wed, 21 Feb 2018 12:34:51 +0100 Subject: [ovirt-users] Reinitializing lockspace In-Reply-To: References: Message-ID: Hi, the bug you found describes the right procedure for reinitializing the lockspace indeed. The hosted engine tool just packages the script to make it easier for you to use. You should check whether all hosted engine tools are down first (systemctl stop ovirt-ha-agent ovirt-ha-broker) on all hosts before attempting the reinitialization. Also check the storage connection to the lockspace and that all hosts have different host ids in /etc/ovirt-hosted-engine/hosted-engine.conf. I can't help you more without logs and more details about the issue. Like what version you are using and what happened that you started looking into logs in the first place. Best regards Martin Sivak On Wed, Feb 21, 2018 at 12:04 AM, Jamie Lawrence wrote: > Hello, > > I have a sanlock problem. I don't fully understand the logs, but from what I can gather, messages like this mean it ain't working.
> > 2018-02-16 14:51:46 22123 [15036]: s1 renewal error -107 delta_length 0 last_success 22046 > 2018-02-16 14:51:47 22124 [15036]: 53977885 aio collect RD 0x7fe5040008c0:0x7fe5040008d0:0x7fe518922000 result -107:0 match res > 2018-02-16 14:51:47 22124 [15036]: s1 delta_renew read rv -107 offset 0 /rhev/data-center/mnt/glusterSD/sc5-gluster-10g-1.squaretrade.com:ovirt__images/53977885-0887-48d0-a02c-8d9e3faec93c/dom_md/ids > > I attempted `hosted-engine --reinitialize-lockspace --force`, which didn't appear to do anything, but who knows. > > I downed everything and and tried `sanlock direct init -s ....`, which caused sanlock to dump core. > > At this point the only thing I can think of to do is down everything, whack and manually recreate the lease files and try again. I'm worried that that will lose something that the setup did or will otherwise destroy the installation. It looks like this has been done by others[1], but the references I can find are a bit old, so I'm unsure if that is still a valid approach. > > So, questions: > > - Will that work? > - Is there something I should do instead of that? > > Thanks, > > -j > > > [1] https://bugzilla.redhat.com/show_bug.cgi?id=1116469 > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From eshenitz at redhat.com Wed Feb 21 11:50:12 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Wed, 21 Feb 2018 13:50:12 +0200 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: According to the logs, it seems like you somehow missing a table in the DB - STORAGE_DOMAIN_MAP_TABLE. 
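When triaging "bad SQL grammar" failures like the one quoted in this thread, the first step is usually to pin down exactly which relation PostgreSQL is complaining about; the engine log wraps the raw PSQLException text, so a small parser can pull the name out of a pasted log line. A sketch with a hypothetical helper name (not part of any oVirt tooling):

```python
import re

def missing_relation(error_text):
    """Return the relation name from a PostgreSQL 'relation ... does not exist'
    (SQLSTATE 42P01) error message, or None if the text is something else."""
    m = re.search(r'relation "([^"]+)" does not exist', error_text)
    return m.group(1) if m else None
```

Running it over the engine.log excerpt from this thread would return `storage_domain_map_table`, confirming which relation the failing PL/pgSQL function expects to find.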
4211-b98f-a37604642251] Command 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist Did you tryied to run some SQL query which cause that issue? On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik < ladislav.humenik at 1und1.de> wrote: > Hello, > > we can not remove old NFS-data storage domains, this 4 are already > deactivated and unattached: > > engine=> select id,storage_name from storage_domains where storage_name > like 'bs09%'; > id | storage_name > --------------------------------------+--------------- > 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm > 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm > f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm > a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm > (4 rows) > > > The only images which still resides in DB are OVF_STORE templates: > > engine=> select image_guid,storage_name,disk_description from > images_storage_domain_view where storage_name like 'bs09%'; > image_guid | storage_name | disk_description > --------------------------------------+---------------+------------------ > 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm | OVF_STORE > 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm | OVF_STORE > 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE > 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm | OVF_STORE > bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm | OVF_STORE > 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE > 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE > dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE > (8 rows) > > > > Current oVirt Engine version: 4.1.8.2-1.el7.centos > Exception logs from engine are in attachment > > Do you have any magic sql statement to figure out what 
is causing this > exception and how we can remove those storage domains without disruption ? > > Thank you in advance > > -- > Ladislav Humenik > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Wed Feb 21 12:38:00 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Wed, 21 Feb 2018 13:38:00 +0100 Subject: [ovirt-users] Disk image upload pausing In-Reply-To: References: Message-ID: <20180221123801.1B297E2281@smtp01.mail.de> Hi, I get this : ImageProxyAddress:hosted-ovirt-engine.domain.loc:54323 version: general Regards Le 21-Feb-2018 10:16:23 +0100, gianluca.cecchi at gmail.com a crit: On Tue, Feb 20, 2018 at 2:33 PM, wrote: Hi, Here are lines I have found for my last faulty try : Just to be sure it is not a problem I got before and inherited by an update from a previous version (btw: version and history of this install?): if you execute this on engine what do you get engine-config -g ImageProxyAddress Gianluca ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From spfma.tech at e.mail.fr Wed Feb 21 12:41:22 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Wed, 21 Feb 2018 13:41:22 +0100 Subject: [ovirt-users] Multiple 'scsi' controllers with index '0'. In-Reply-To: <20180212084329.E4CE8E2266@smtp01.mail.de> References: <20180212084329.E4CE8E2266@smtp01.mail.de> Message-ID: <20180221124122.18FDEE1D50@smtp01.mail.de> Hi, I just got the same problem again, with version 4.2.2-1. This time I wanted to add one more disk to a newly created machine, then it was not able to start again. Neither disabling or removing the disk helps. 
Is there a way to tweak the configuration to remove this offending device, so that I can start the machine again ? Regards Le 12-Feb-2018 09:43:53 +0100, spfma.tech at e.mail.fr a crit: Hi, I have tried this but it didn't solved the problem. I removed the disk and tried to boot with an ISO but no more success. As I need to work on what was installed on this disk, I tried the most violent but efficient solution : destroying the VM and recreating it, keeping its mac address. Le 09-Feb-2018 14:29:41 +0100, gianluca.cecchi at gmail.com a crit: Il 09 Feb 2018 13:50, ha scritto: I have just done it. Is it possible to tweak this XML file (where ?) in order to get a working VM ? Regards Le 09-Feb-2018 12:44:08 +0100, fromani at redhat.com a crit: Hi, could you please file a bug? Please attach the failing XML, you should find it pretty easily in the Vdsm logs. Thanks, On 02/09/2018 12:08 PM, spfma.tech at e.mail.fr wrote: Hi, I just wanted to increase the number of CPUs for a VM and after validating, I got the following error when I try to start it: VM vm-test is down with error. Exit message: XML error: Multiple 'scsi' controllers with index '0'. I am sure it is a bug, but for now, what can I do in order to remove or edit conflicting devices definitions ? I need to be able to start this machine. 
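Before tweaking the configuration by hand, it can help to confirm which devices actually collide by scanning the VM's libvirt domain XML (retrievable from the vdsm logs, as suggested earlier in this thread) for duplicate controller type/index pairs. A minimal sketch; the helper name is mine, and the toy XML used to exercise it is not a real oVirt domain definition:

```python
import xml.etree.ElementTree as ET
from collections import Counter

def duplicate_controllers(domain_xml):
    """Return the (type, index) pairs that appear on more than one
    <controller> element under <devices> in a libvirt domain XML."""
    root = ET.fromstring(domain_xml)
    pairs = Counter(
        (c.get("type"), c.get("index"))
        for c in root.findall(".//devices/controller")
    )
    return [pair for pair, count in pairs.items() if count > 1]
```

For the error quoted above, this would flag `('scsi', '0')` appearing twice, pointing at the exact pair of elements to reconcile.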
4.2.0.2-1.el7.centos (as I still don't manage to update the hosted engine to something newer) Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Francesco Romani Senior SW Eng., Virtualization R&D Red Hat IRC: fromani github: @fromanirh ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users I seem to remember a similar problem and that deactivating disks of the VM and the then activating them again corrected the problem. Or in case that doesn't work, try to remove disks and Then readd from the floating disk pane... Hih, gianluca ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From ladislav.humenik at 1und1.de Wed Feb 21 12:57:44 2018 From: ladislav.humenik at 1und1.de (Ladislav Humenik) Date: Wed, 21 Feb 2018 13:57:44 +0100 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: Hi, no this table "STORAGE_DOMAIN_MAP_TABLE" is not present at any of our ovirt's and based on link this is just a temporary table. Can you point me to what query should I test? thank you in advance Ladislav On 21.02.2018 12:50, Eyal Shenitzky wrote: > According to the logs, it seems like you somehow missing a table in > the DB - > STORAGE_DOMAIN_MAP_TABLE. 
> 4211-b98f-a37604642251] Command > 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' > failed: CallableStatementCallback; bad SQL grammar [{call > force_delete_storage_domain(?)}]; nested exception is > org.postgresql.util.PSQLException: ERROR: relation > "storage_domain_map_table" does not exist > Did you tryied to run some SQL query which cause that issue? > > > > On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik > > wrote: > > Hello, > > we can not remove old NFS-data storage domains, this 4 are already > deactivated and unattached: > > engine=> select id,storage_name from storage_domains where > storage_name like 'bs09%'; > ????????????????? id????????????????? | storage_name > --------------------------------------+--------------- > ?819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm > ?9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm > ?f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm > ?a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm > (4 rows) > > > The only images which still resides in DB are OVF_STORE templates: > > engine=> select image_guid,storage_name,disk_description from > images_storage_domain_view where storage_name like 'bs09%'; > ????????????? image_guid????????????? | storage_name? | > disk_description > --------------------------------------+---------------+------------------ > ?6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm? | OVF_STORE > ?997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm? | OVF_STORE > ?2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE > ?85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm? | OVF_STORE > ?bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm? 
| OVF_STORE > ?797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE > ?5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE > ?dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE > (8 rows) > > > > Current oVirt Engine version: 4.1.8.2-1.el7.centos > Exception logs from engine are in attachment > > Do you have any magic sql statement to figure out what is causing > this exception and how we can remove those storage domains without > disruption ? > > Thank you in advance > > -- > Ladislav Humenik > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > > > > -- > Regards, > Eyal Shenitzky -- Ladislav Humenik System administrator / VI IT Operations Hosting Infrastructure 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany Phone: +49 721 91374-8361 E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg Aufsichtsratsvorsitzender: Ren? Obermann Member of United Internet Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From eshenitz at redhat.com Wed Feb 21 13:03:28 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Wed, 21 Feb 2018 15:03:28 +0200 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: Did you manage to set the domain to maintenance? If so you can try to 'Destroy' the domain. On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik wrote: > Hi, no > > > this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any of our oVirt > instances and > > based on link > > this is just a temporary table. Can you point me to what query should I > test? > > thank you in advance > > Ladislav > > On 21.02.2018 12:50, Eyal Shenitzky wrote: > > According to the logs, it seems like you are somehow missing a table in the DB > - > > STORAGE_DOMAIN_MAP_TABLE. > > 4211-b98f-a37604642251] Command 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist > > Did you try running some SQL query that could have caused this issue? 
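[A first diagnostic step, added here as a hedged sketch rather than an official procedure: it only reads the PostgreSQL system catalogs of the engine database, checking whether the relation named in the error exists anywhere and showing the body of the stored procedure that references it, so you can see where the temporary table is supposed to come from.]

```sql
-- Run against the engine database (read-only checks).
-- 1. Does the relation from the error message exist in any schema?
SELECT n.nspname AS schema, c.relname AS relation
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = 'storage_domain_map_table';

-- 2. Show the source of the failing stored procedure, to see which
--    statement expects the temporary table to already exist:
SELECT prosrc FROM pg_proc WHERE proname = 'force_delete_storage_domain';
```

If the first query returns no rows while the second shows the procedure using the table without creating it, the procedure definition on this engine is likely stale relative to the installed schema version.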
> > > > > On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik < > ladislav.humenik at 1und1.de> wrote: > >> Hello, >> >> we can not remove old NFS-data storage domains, this 4 are already >> deactivated and unattached: >> >> engine=> select id,storage_name from storage_domains where storage_name >> like 'bs09%'; >> id | storage_name >> --------------------------------------+--------------- >> 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm >> 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm >> f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm >> a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm >> (4 rows) >> >> >> The only images which still resides in DB are OVF_STORE templates: >> >> engine=> select image_guid,storage_name,disk_description from >> images_storage_domain_view where storage_name like 'bs09%'; >> image_guid | storage_name | disk_description >> --------------------------------------+---------------+------------------ >> 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm | OVF_STORE >> 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm | OVF_STORE >> 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE >> 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm | OVF_STORE >> bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm | OVF_STORE >> 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE >> 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE >> dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE >> (8 rows) >> >> >> >> Current oVirt Engine version: 4.1.8.2-1.el7.centos >> Exception logs from engine are in attachment >> >> Do you have any magic sql statement to figure out what is causing this >> exception and how we can remove those storage domains without disruption ? 
>> >> Thank you in advance >> >> -- >> Ladislav Humenik >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Regards, > Eyal Shenitzky > > > -- > Ladislav Humenik > > System administrator / VI > IT Operations Hosting Infrastructure > > 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany > Phone: +49 721 91374-8361 <+49%20721%20913748361> > E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de > > Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 > > Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg > Aufsichtsratsvorsitzender: René Obermann > > > Member of United Internet > > Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. > > This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From ladislav.humenik at 1und1.de Wed Feb 21 13:17:10 2018 From: ladislav.humenik at 1und1.de (Ladislav Humenik) Date: Wed, 21 Feb 2018 14:17:10 +0100 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: Hi, of course I did. I put these domains into maintenance first, then detached them from the datacenter. 
The last step is destroy or remove ("just name it"), and this last step is mysteriously not working, throwing the SQL exception which I attached before. Thank you in advance ladislav On 21.02.2018 14:03, Eyal Shenitzky wrote: > Did you manage to set the domain to maintenance? > > If so you can try to 'Destroy' the domain. > > On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik > > wrote: > > Hi, no > > > this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any > of our oVirt instances and > > based on link > > this is just a temporary table. Can you point me to what > query should I test? > > thank you in advance > > Ladislav > > > On 21.02.2018 12:50, Eyal Shenitzky wrote: >> According to the logs, it seems like you are somehow missing a >> table in the DB - >> STORAGE_DOMAIN_MAP_TABLE. >> 4211-b98f-a37604642251] Command >> 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' >> failed: CallableStatementCallback; bad SQL grammar [{call >> force_delete_storage_domain(?)}]; nested exception is >> org.postgresql.util.PSQLException: ERROR: relation >> "storage_domain_map_table" does not exist >> Did you try running some SQL query that could have caused this issue? >> >> >> >> On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik >> > wrote: >> >> Hello, >> >> we can not remove old NFS-data storage domains, these 4 >> are already deactivated and unattached: >> >> engine=> select id,storage_name from storage_domains >> where storage_name like 'bs09%'; >>                  id                  | storage_name >> --------------------------------------+--------------- >>  819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm >>  9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm >>  f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm >>  a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm >> (4 rows) >> >> >> The only images which still reside in the DB are OVF_STORE >> templates: >> >> engine=> select image_guid,storage_name,disk_description >> from images_storage_domain_view where storage_name like >> 'bs09%'; >>              image_guid              | storage_name  | disk_description >> --------------------------------------+---------------+------------------ >>  6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm  | >> OVF_STORE >>  997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm  | >> OVF_STORE >>  2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | >> OVF_STORE >>  85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm  | >> OVF_STORE >>  bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm  | >> OVF_STORE >>  797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | >> OVF_STORE >>  5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | >> OVF_STORE >>  dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | >> OVF_STORE >> (8 rows) >> >> >> >> Current oVirt Engine version: 4.1.8.2-1.el7.centos >> Exception logs from engine are in attachment >> >> Do you have any magic sql statement to figure out what >> is causing this exception and how we can remove those >> storage domains without disruption ? >> >> Thank you in advance >> >> -- >> Ladislav Humenik >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> >> >> >> -- >> Regards, >> Eyal Shenitzky > > -- > Ladislav Humenik > > System administrator / VI > IT Operations Hosting Infrastructure > > 1&1 Internet SE | Ernst-Frey-Str. 
5 | 76135 Karlsruhe | Germany > Phone:+49 721 91374-8361 > E-Mail:ladislav.humenik at 1und1.de | Web:www.1und1.de > > Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 > > Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg > Aufsichtsratsvorsitzender: René Obermann > > > Member of United Internet > > Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. 
This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gpnbnncpehjpckhg.png Type: image/png Size: 10782 bytes Desc: not available URL: From Robert.Langley at ventura.org Wed Feb 21 14:01:36 2018 From: Robert.Langley at ventura.org (Langley, Robert) Date: Wed, 21 Feb 2018 14:01:36 +0000 Subject: [ovirt-users] Use of virtual disks In-Reply-To: References: , Message-ID: Right. I know about and understand those features. The problem is that whenever I try them, they fail. Constant complaints about the snapshot, which I am unable to delete. How else can I work with the disk files? I need something on a more advanced level. Get Outlook for iOS ________________________________ From: Eyal Shenitzky Sent: Wednesday, February 21, 2018 12:01:57 AM To: Langley, Robert Cc: users at ovirt.org Subject: Re: [ovirt-users] Use of virtual disks Hi Robert, If I understand correctly, you are trying to share disks between two VMs. In oVirt you can set a disk as shareable by editing the disk properties. A shareable disk can be attached to multiple VMs. If you wish to detach a disk from one VM and attach it to another VM, it is simple too: select the VM that holds the relevant disk and detach the disk from it; you will find the detached disk under the 'disks' tab. To attach the floating disk to another VM, select the desired VM, go to the disks tab and press 'attach'. You will see all the floating disks that exist in the data-center. I hope it helped, please ask if you have more questions. 
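[For bulk moves, the same detach-then-attach flow can also be scripted against the oVirt REST API. The sketch below only builds the two requests as plain data, so the sequence can be inspected without a live engine; the URL paths and the `detach_only` parameter are assumptions based on the oVirt 4.x REST API, and the IDs are placeholders.]

```python
def disk_move_requests(src_vm_id, dst_vm_id, disk_id):
    """Build the detach-then-attach request pair that moves a
    non-shareable disk from one VM to another via a floating disk."""
    detach = {
        "method": "DELETE",
        # Removing the attachment (not the disk itself) leaves the disk
        # floating in the data center; 'detach_only' is an assumption
        # based on the oVirt REST API's disk-attachment removal.
        "path": "/ovirt-engine/api/vms/%s/diskattachments/%s" % (src_vm_id, disk_id),
        "params": {"detach_only": "true"},
    }
    attach = {
        "method": "POST",
        # Creating a new attachment on the target VM picks the floating
        # disk up again, matching the UI's 'attach' dialog.
        "path": "/ovirt-engine/api/vms/%s/diskattachments" % dst_vm_id,
        "body": {"disk": {"id": disk_id}},
    }
    return [detach, attach]
```

Sent in this order, the disk briefly appears as a floating disk between the two calls, which mirrors the UI flow described above.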
For more information, you can visit https://www.ovirt.org/documentation/admin-guide/chap-Virtual_Machine_Disks/ On Wed, Feb 21, 2018 at 9:16 AM, Langley, Robert > wrote: My first experience with this situation using oVirt. I do come from using VMWare and have also been using oVirt for several years. We've also paid for RHEV, but migration is held up at the moment. I am trying to prepare for that. My problem is that it does not appear to be so simple to utilize the VM disks. In VMWare it is so simple. Snapshot or not, in vSphere I can take the virtual disk file and use it for another VM when needed. It doesn't make sense to me in oVirt. I have another entry here about my issue that led to this need, where I am not able to delete snapshot files for those disks I was attempting to live migrate and there was an issue... Now, the empty snapshot files are preventing some VMs from starting. It seems I should be able to take the VM disk files, without the snapshots, and use them with another VM. But, that does not appear possible from what I can tell in oVirt. I desperately need to get one specific VM going. The other two, no worries. I was able to restore one of the affected VMs from backup. The third is not important at all and can easily be re-created. Is anyone experienced with taking VM disks from one VM and using them (without snapshots) with another VM? I could really use some sort of workaround. Thanks if anyone can come up with a good answer that would help. -Robert L. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eshenitz at redhat.com Wed Feb 21 14:01:08 2018 From: eshenitz at redhat.com (Eyal Shenitzky) Date: Wed, 21 Feb 2018 16:01:08 +0200 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: Note that destroy and remove are two different operations. Did you try both? On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik wrote: > Hi, of course i did. I put these domain's first in to maintenance, then > Detached it from the datacenter. > > The last step is destroy or remove "just name it" and this last step is > mysteriously not working. > > > and throwing sql exception which I attached before. > > Thank you in advance > ladislav > > On 21.02.2018 14:03, Eyal Shenitzky wrote: > > Did you manage to set the domain to maintenance? > > If so you can try to 'Destroy' the domain. > > On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik < > ladislav.humenik at 1und1.de> wrote: > >> Hi, no >> >> >> this table "STORAGE_DOMAIN_MAP_TABLE" is not present at any of our >> ovirt's and >> >> based on link >> >> this is just a temporary table. Can you point me to what query should I >> test? >> >> thank you in advance >> >> Ladislav >> >> On 21.02.2018 12:50, Eyal Shenitzky wrote: >> >> According to the logs, it seems like you somehow missing a table in the >> DB - >> >> STORAGE_DOMAIN_MAP_TABLE. >> >> 4211-b98f-a37604642251] Command 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist >> >> Did you tryied to run some SQL query which cause that issue? 
>> >> >> >> >> On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik < >> ladislav.humenik at 1und1.de> wrote: >> >>> Hello, >>> >>> we can not remove old NFS-data storage domains, this 4 are already >>> deactivated and unattached: >>> >>> engine=> select id,storage_name from storage_domains where storage_name >>> like 'bs09%'; >>> id | storage_name >>> --------------------------------------+--------------- >>> 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm >>> 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm >>> f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm >>> a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm >>> (4 rows) >>> >>> >>> The only images which still resides in DB are OVF_STORE templates: >>> >>> engine=> select image_guid,storage_name,disk_description from >>> images_storage_domain_view where storage_name like 'bs09%'; >>> image_guid | storage_name | disk_description >>> --------------------------------------+---------------+----- >>> ------------- >>> 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm | OVF_STORE >>> 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm | OVF_STORE >>> 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE >>> 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm | OVF_STORE >>> bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm | OVF_STORE >>> 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE >>> 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE >>> dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE >>> (8 rows) >>> >>> >>> >>> Current oVirt Engine version: 4.1.8.2-1.el7.centos >>> Exception logs from engine are in attachment >>> >>> Do you have any magic sql statement to figure out what is causing this >>> exception and how we can remove those storage domains without disruption ? 
>>> >>> Thank you in advance >>> >>> -- >>> Ladislav Humenik >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> >> >> -- >> Regards, >> Eyal Shenitzky >> >> >> -- >> Ladislav Humenik >> >> System administrator / VI >> IT Operations Hosting Infrastructure >> >> 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany >> Phone: +49 721 91374-8361 <+49%20721%20913748361> >> E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de >> >> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 >> >> Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg >> Aufsichtsratsvorsitzender: Ren? Obermann >> >> >> Member of United Internet >> >> Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. >> >> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. >> >> > > > -- > Regards, > Eyal Shenitzky > > > -- > Ladislav Humenik > > System administrator / VI > IT Operations Hosting Infrastructure > > 1&1 Internet SE | Ernst-Frey-Str. 
5 | 76135 Karlsruhe | Germany > Phone: +49 721 91374-8361 <+49%20721%20913748361> > E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de > > Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 > > Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg > Aufsichtsratsvorsitzender: René Obermann > > > Member of United Internet > > Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. > > This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. > > -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: gpnbnncpehjpckhg.png Type: image/png Size: 10782 bytes Desc: not available URL: From ladislav.humenik at 1und1.de Wed Feb 21 14:07:00 2018 From: ladislav.humenik at 1und1.de (Ladislav Humenik) Date: Wed, 21 Feb 2018 15:07:00 +0100 Subject: [ovirt-users] Unable to remove storage domain's In-Reply-To: References: Message-ID: Hi, yes, destroy is also not working and throwing an SQL exception; detailed logs from the engine when doing destroy are in the attachment thank you in advance ladislav On 21.02.2018 15:01, Eyal Shenitzky wrote: > Note that destroy and remove are two different operations. > > Did you try both? > > On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik > > wrote: > > Hi, of course I did. 
I put these domain's first in to maintenance, > then Detached it from the datacenter. > > The last step is destroy or remove "just name it" and this last > step is mysteriously not working. > > > > and throwing sql exception which I attached before. > > Thank you in advance > ladislav > > On 21.02.2018 14:03, Eyal Shenitzky wrote: >> Did you manage to set the domain to maintenance? >> >> If so you can try to 'Destroy' the domain. >> >> On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik >> > wrote: >> >> Hi, no >> >> >> this table "STORAGE_DOMAIN_MAP_TABLE" is not present at any >> of our ovirt's and >> >> based on link >> >> this is just a temporary table. Can you point me to what >> query should I test? >> >> thank you in advance >> >> Ladislav >> >> >> On 21.02.2018 12:50, Eyal Shenitzky wrote: >>> According to the logs, it seems like you somehow missing a >>> table in the DB - >>> STORAGE_DOMAIN_MAP_TABLE. >>> 4211-b98f-a37604642251] Command >>> 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' >>> failed: CallableStatementCallback; bad SQL grammar [{call >>> force_delete_storage_domain(?)}]; nested exception is >>> org.postgresql.util.PSQLException: ERROR: relation >>> "storage_domain_map_table" does not exist >>> Did you tryied to run some SQL query which cause that issue? >>> >>> >>> >>> On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik >>> >> > wrote: >>> >>> Hello, >>> >>> we can not remove old NFS-data storage domains, this 4 >>> are already deactivated and unattached: >>> >>> engine=> select id,storage_name from storage_domains >>> where storage_name like 'bs09%'; >>> id????????????????? 
| storage_name >>> --------------------------------------+--------------- >>> ?819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm >>> ?9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm >>> ?f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm >>> ?a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm >>> (4 rows) >>> >>> >>> The only images which still resides in DB are OVF_STORE >>> templates: >>> >>> engine=> select image_guid,storage_name,disk_description >>> from images_storage_domain_view where storage_name like >>> 'bs09%'; >>> image_guid????????????? | storage_name? | disk_description >>> --------------------------------------+---------------+------------------ >>> ?6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm? | >>> OVF_STORE >>> ?997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm? | >>> OVF_STORE >>> ?2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | >>> OVF_STORE >>> ?85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm? | >>> OVF_STORE >>> ?bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm? | >>> OVF_STORE >>> ?797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | >>> OVF_STORE >>> ?5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | >>> OVF_STORE >>> ?dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | >>> OVF_STORE >>> (8 rows) >>> >>> >>> >>> Current oVirt Engine version: 4.1.8.2-1.el7.centos >>> Exception logs from engine are in attachment >>> >>> Do you have any magic sql statement to figure out what >>> is causing this exception and how we can remove those >>> storage domains without disruption ? >>> >>> Thank you in advance >>> >>> -- >>> Ladislav Humenik >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >>> >>> >>> -- >>> Regards, >>> Eyal Shenitzky >> >> -- >> Ladislav Humenik >> >> System administrator / VI >> IT Operations Hosting Infrastructure >> >> 1&1 Internet SE |Ernst-Frey-Str. 
5 | 76135 Karlsruhe | Germany >> >> Phone:+49 721 91374-8361 >> E-Mail:ladislav.humenik at 1und1.de | Web:www.1und1.de >> >> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 >> >> Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg >> Aufsichtsratsvorsitzender: Ren? Obermann >> >> >> Member of United Internet >> >> Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. >> >> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. >> >> >> >> >> -- >> Regards, >> Eyal Shenitzky > > -- > Ladislav Humenik > > System administrator / VI > IT Operations Hosting Infrastructure > > 1&1 Internet SE |Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany > > Phone:+49 721 91374-8361 > E-Mail:ladislav.humenik at 1und1.de | Web:www.1und1.de > > Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 > > Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg > Aufsichtsratsvorsitzender: Ren? Obermann > > > Member of United Internet > > Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. 
Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. > > This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. > > > > > -- > Regards, > Eyal Shenitzky -- Ladislav Humenik System administrator / VI IT Operations Hosting Infrastructure 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany Phone: +49 721 91374-8361 E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg Aufsichtsratsvorsitzender: Ren? Obermann Member of United Internet Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: gpnbnncpehjpckhg.png Type: image/png Size: 10782 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: destroy.log Type: text/x-log Size: 8425 bytes Desc: not available URL: From spfma.tech at e.mail.fr Wed Feb 21 14:38:31 2018 From: spfma.tech at e.mail.fr (spfma.tech at e.mail.fr) Date: Wed, 21 Feb 2018 15:38:31 +0100 Subject: [ovirt-users] VM with "a lot" of disks : OK ? Message-ID: <20180221143831.CF940E2266@smtp01.mail.de> Hi, Is there any kind of penalty or risk in using something like a dozen separate disks for a VM stored on an NFS datastore ? Regards ------------------------------------------------------------------------------------------------- FreeMail powered by mail.fr -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiri.slezka at slu.cz Wed Feb 21 14:43:25 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Wed, 21 Feb 2018 15:43:25 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> Message-ID: <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> On 02/20/2018 11:09 PM, Arik Hadas wrote: > > > On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka > wrote: > > On 02/20/2018 03:48 PM, Arik Hadas wrote: > > > > > > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka > > >> wrote: > > > >      Hi Arik, > > > >      On 02/20/2018 01:22 PM, Arik Hadas wrote: > >      > > >      > > >      > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka > > > >      > > >>> wrote: > >      > > >      >      Hi, > >      > > >      > > >      > Hi Jiří, > >      > > >      > > >      > > >      >      I would like to try to import some ova files into our oVirt > instance [1] > >      >      [2] but I am facing problems. > >      > > >      >      I have downloaded all ova images into one of the hosts > (ovirt01) into > >      >      directory /ova > >      > > >      >      ll /ova/ > >      >      total 6532872 > >      >      -rw-r--r--. 
1 vdsm kvm 1160387072 Feb 16 16:21 > HAAS-hpcowrie.ovf > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 > HAAS-hpdio.ova > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 846736896 Feb 16 16:22 > HAAS-hpjdwpd.ova > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 891043328 Feb 16 16:23 > HAAS-hptelnetd.ova > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 908222464 Feb 16 16:23 > HAAS-hpuchotcp.ova > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 880643072 Feb 16 16:24 > HAAS-hpuchoudp.ova > >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 890833920 Feb 16 16:24 > HAAS-hpuchoweb.ova > >? ? ?> > >? ? ?>? ? ?Then I tried to import them - from host ovirt01 and > directory /ova but > >? ? ?>? ? ?spinner spins infinitly and nothing is happen. > >? ? ?> > >? ? ?> > >? ? ?> And does it work when you provide a path to the actual ova > file, i.e., > >? ? ?> /ova/HAAS-hpdio.ova, rather than to the directory? > > > >? ? ?this time it ends with "Failed to load VM configuration from > OVA file: > >? ? ?/ova/HAAS-hpdio.ova" error.? > > > > > > Note that the logic that is applied on a specified folder is "try > > fetching an 'ova folder' out of the destination folder" rather than > > "list all the ova files inside the specified folder". It seems > that you > > expected the former output since there are no disks in that > folder, right? > > yes, It would be more user friendly to list all ova files and then > select which one to import (like listing all vms in vmware import) > > Maybe description of path field in manager should be "Path to ova file" > instead of "Path" :-) > > > Sorry, I obviously meant 'latter' rather than 'former' before.. > Yeah, I agree that would be better, at least until listing the OVA files > in the folder is implemented (that was the original plan, btw) - could > you please file a bug? yes, sure > >? ? ?>? ? ?I cannot see anything relevant in vdsm log of host ovirt01. > >? ? ?> > >? ? ?>? ? ?In the engine.log of our standalone ovirt manager is just this > >? ? ?>? ? ?relevant line > >? ? ?> > >? ? ?>? ? 
?2018-02-20 12:35:04,289+01 INFO > >? ? ?>? ? ?[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default > >? ? ?>? ? ?task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible > >? ? ?>? ? ?command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > >? ? ?>? ? ?[/usr/bin/ansible-playbook, > >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784, > >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova, > >? ? ?>? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > >? ? ?>? ? ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > >? ? ? > > >? ? ?>? ? ? > >? ? ? >>.slu.cz.log] > >? ? ?> > >? ? ?>? ? ?also there are two ansible processes which are still running > >? ? ?(and makes > >? ? ?>? ? ?heavy load on system (load 9+ and growing, it looks like it > >? ? ?eats all the > >? ? ?>? ? ?memory and system starts swapping)) > >? ? ?> > >? ? ?>? ? ?ovirt? ? 32087? 3.3? 0.0 332252? 5980 ?? ? ? ? Sl? > ?12:35? ?0:41 > >? ? ?>? ? ?/usr/bin/python2 /usr/bin/ansible-playbook > >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 > >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova > >? ? ?>? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > >? ? ?>? ? ?ovirt? ? 32099 57.5 78.9 15972880 11215312 ?? ?R? ? > 12:35? 11:52 > >? ? ?>? ? ?/usr/bin/python2 /usr/bin/ansible-playbook > >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 > >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova > >? ? ?>? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > >? ? ?> > >? ? ?>? ? ?playbook looks like > >? ? ?> > >? ? ?>? ? ?- hosts: all > >? ? ?>? ? ?? remote_user: root > >? ? ?>? ? ?? gather_facts: no > >? ? ?> > >? ? ?>? ? ?? roles: > >? ? ?>? ? ?? ? - ovirt-ova-query > >? ? ?> > >? 
? ?>? ? ?and it looks like it only runs query_ova.py but on all > hosts? > >? ? ?> > >? ? ?> > >? ? ?> No, the engine provides ansible the host to run on when it > >? ? ?executes the > >? ? ?> playbook. > >? ? ?> It would only be executed on the selected host. > >? ? ?> ? > >? ? ?> > >? ? ?> > >? ? ?>? ? ?How does this work? ...or should it work? > >? ? ?> > >? ? ?> > >? ? ?> It should, especially that part of querying the OVA and is > supposed to > >? ? ?> be really quick. > >? ? ?> Can you please share the engine log and > >? ? ?> > >? ? > ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > >? ? ? > > >? ? ?> > >? ? ? >>.slu.cz.log ? > > > >? ? ?engine log is here: > > > >? ? ?https://pastebin.com/nWWM3UUq > > > > > > Thanks. > > Alright, so now the configuration is fetched but its processing fails. > > We fixed many issues in this area recently, but it appears that > > something is wrong with the actual size of the disk within the ovf file > > that resides inside this ova file. > > Can you please share that ovf file that resides inside?/ova/HAAS-hpdio.ova? > > file HAAS-hpdio.ova > HAAS-hpdio.ova: POSIX tar archive (GNU) > > [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova > HAAS-hpdio.ovf > HAAS-hpdio-disk001.vmdk > > file HAAS-hpdio.ovf is here: > > https://pastebin.com/80qAU0wB > > > Thanks again. > So that seems to be a VM that was exported from Virtual Box, right? > They don't do anything that violates the OVF specification but they do > some non-common things that we don't anticipate: yes, it is most likely ova from VirtualBox > First, they don't specify the actual size of the disk and the current > code in oVirt relies on that property. > There is a workaround for this though: you can extract an OVA file, edit > its OVF configuration - adding ovf:populatedSize="X" (and change > ovf:capacity as I'll describe next) to the Disk element inside the > DiskSection and pack the OVA again (tar cvf X is either: > 1. 
the actual size of the vmdk file + some buffer (iirc, we used to take > 15% of extra space for the conversion) > 2. if you're using a file storage or you don't mind consuming more > storage space on your block storage, simply set X to the virtual size of > the disk (in bytes) as indicated by the ovf:capacity filed, e.g., > ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova. > > Second, the virtual size (indicated by ovf:capacity) is specified in > bytes. The specification says that the default unit of allocation shall > be bytes, but practically every OVA file that I've ever saw specified it > in GB and the current code in oVirt kind of assumes that this is the > case without checking the ovf:capacityAllocationUnits attribute that > could indicate the real unit of allocation [1]. > Anyway, long story short, the virtual size of the disk should currently > be specified in GB, e.g., ovf:populatedSize="20" in the case of > HAAS-hpdio.ova. wow, thanks for this excellent explanation. I have changed this in ovf file ... That should do it. If not, please share the OVA file and I will examine > it in my environment. original file is at https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova > > [1]?https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220 > > > > >? ? ?file > >? ? > ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net > > >? ? ? > > >? ? ?in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) > > > > > > This issue is also resolved in 4.2.2. > > In the meantime, please create the ?/var/log/ovirt-engine/ova/ folder > > manually and make sure its permissions match the ones of the other > > folders in ?/var/log/ovirt-engine. > > ok, done. 
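For reference, the OVF workaround described above (adding ovf:populatedSize to the Disk element and expressing ovf:capacity in GB before repacking the OVA with tar) can be sketched in Python. This is a minimal sketch, assuming the attributes are namespaced under the standard OVF envelope namespace; the function name and the choice of populated size are illustrative, not part of oVirt:

```python
import xml.etree.ElementTree as ET

OVF_NS = "http://schemas.dmtf.org/ovf/envelope/1"

def patch_ovf_disks(ovf_xml, populated_size_bytes):
    """Add ovf:populatedSize and rewrite ovf:capacity from bytes to GB on
    every Disk element, per the workaround described in the thread."""
    ET.register_namespace("ovf", OVF_NS)
    root = ET.fromstring(ovf_xml)
    for disk in root.iter("{%s}Disk" % OVF_NS):
        capacity_bytes = int(disk.get("{%s}capacity" % OVF_NS))
        # oVirt (pre-4.2.2) reads ovf:capacity as GB, not bytes
        disk.set("{%s}capacity" % OVF_NS, str(capacity_bytes // 1024 ** 3))
        # either the vmdk size plus a buffer, or simply the virtual size
        disk.set("{%s}populatedSize" % OVF_NS, str(populated_size_bytes))
    return ET.tostring(root, encoding="unicode")
```

Repacking would then be something like `tar cvf HAAS-hpdio_new.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk`; note the OVF descriptor is expected to be the first member of the archive.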
After another try there is this log file > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net > .slu.cz.log > > https://pastebin.com/M5J44qur > > > Is it the log of the execution of the ansible playbook that was provided > with a path to the /ova folder? > I'm interested in that in order to see how comes that its execution > never completed. well, I dont think so, it is log from import with full path to ova file > ? > > > > >? ? ?Cheers, > > > >? ? ?Jiri Slezka > > > >? ? ?> ? > >? ? ?> > >? ? ?> > >? ? ?>? ? ?I am using latest 4.2.1.7-1.el7.centos version > >? ? ?> > >? ? ?>? ? ?Cheers, > >? ? ?>? ? ?Jiri Slezka > >? ? ?> > >? ? ?> > >? ? ?>? ? ?[1] https://haas.cesnet.cz/#!index.md > > > > >? ? ?>? ? ? > >? ? ? >> - Cesnet HAAS > >? ? ?>? ? ?[2] https://haas.cesnet.cz/downloads/release-01/ > > >? ? ? > > >? ? ?>? ? ? > >? ? ? >> - Image repository > >? ? ?> > >? ? ?> > >? ? ?>? ? ?_______________________________________________ > >? ? ?>? ? ?Users mailing list > >? ? ?>? ? ?Users at ovirt.org > > >? ? ? > >> > >? ? ?>? ? ?http://lists.ovirt.org/mailman/listinfo/users > > >? ? ? > > >? ? ?>? ? ? > >? ? ? >> > >? ? ?> > >? ? ?> > > > > > > > >? ? ?_______________________________________________ > >? ? ?Users mailing list > >? ? ?Users at ovirt.org > > > >? ? ?http://lists.ovirt.org/mailman/listinfo/users > > >? ? ? > > > > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From Robert.Langley at ventura.org Wed Feb 21 14:54:35 2018 From: Robert.Langley at ventura.org (Langley, Robert) Date: Wed, 21 Feb 2018 14:54:35 +0000 Subject: [ovirt-users] Cannot delete auto-generated snapshot In-Reply-To: References: , Message-ID: I have the server down (the one that was used as the hypervisor when the live migration was done). I'll get the logs in about an hour and a half. I'll check them first with Log Reaper. Get Outlook for iOS ________________________________ From: Eyal Shenitzky Sent: Wednesday, February 21, 2018 12:38:09 AM To: Langley, Robert; users at ovirt.org Subject: Re: [ovirt-users] Cannot delete auto-generated snapshot Hi Robert, The auto-generated snapshot is created while performing live storage migration of disks (moving disks from one storage domain to another while the VM is up); it should be removed automatically by the engine when the live migration ends. The log you sent me doesn't contain the disk migration that you described. Please send me an older log so I will be able to investigate whether there was a problem. On Tue, Feb 20, 2018 at 6:38 PM, Langley, Robert > wrote: Attached now is the vdsm log from the hypervisor currently hosting the VMs. ________________________________ From: Eyal Shenitzky > Sent: Tuesday, February 20, 2018 2:49 AM To: Langley, Robert Cc: users at ovirt.org Subject: Re: [ovirt-users] Cannot delete auto-generated snapshot Hey Robert, Can you please attach the VDSM and Engine logs? Also, please write the version of the engine you are working with. On Tue, Feb 20, 2018 at 12:17 PM, Langley, Robert > wrote: I was moving some virtual disks from one storage server to another. Now, I have a couple of servers that have the auto-generated snapshot, without disks, and I cannot delete them. The VM will not start and there is a complaint that the disks are illegal.
Any help would be appreciated. I'm going to bed for now, but will try to wake up earlier. _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -- Regards, Eyal Shenitzky -- Regards, Eyal Shenitzky -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccox at endlessnow.com Wed Feb 21 14:54:34 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Wed, 21 Feb 2018 08:54:34 -0600 Subject: [ovirt-users] VM with "a lot" of disks : OK ? In-Reply-To: <20180221143831.CF940E2266@smtp01.mail.de> References: <20180221143831.CF940E2266@smtp01.mail.de> Message-ID: <133c483c-3cd4-cdda-bc17-55728609d296@endlessnow.com> On 02/21/2018 08:38 AM, spfma.tech at e.mail.fr wrote: > Hi, > Is there any kind of penalty or risk in using something like a dozen separate > disks for a VM stored on an NFS datastore ? > Regards I don't use NFS, we use an iSCSI SAN, but we have some hosts with that many disks (or more). From ykaul at redhat.com Wed Feb 21 14:54:16 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 21 Feb 2018 16:54:16 +0200 Subject: [ovirt-users] VM with "a lot" of disks : OK ? In-Reply-To: <20180221143831.CF940E2266@smtp01.mail.de> References: <20180221143831.CF940E2266@smtp01.mail.de> Message-ID: On Wed, Feb 21, 2018 at 4:38 PM, wrote: > Hi, > Is there any kind of penalty or risk in using something like a dozen > separate disks for a VM stored on an NFS datastore ? > No. However, note that I believe in some cases NFS performance is slower than iSCSI (which can use multipathing and multiple connections). Y. > Regards > > ------------------------------ > FreeMail powered by mail.fr > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From lveyde at redhat.com Wed Feb 21 15:31:05 2018 From: lveyde at redhat.com (Lev Veyde) Date: Wed, 21 Feb 2018 17:31:05 +0200 Subject: [ovirt-users] [ANN] oVirt 4.2.2 Second Release Candidate is now available Message-ID: The oVirt Project is pleased to announce the availability of the oVirt 4.2.2 Second Release Candidate, as of February 21st, 2018. This update is a release candidate of the second in a series of stabilization updates to the 4.2 series. This is pre-release software. This pre-release should not be used in production. This release is available now for: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later This release supports Hypervisor Hosts running: * Red Hat Enterprise Linux 7.4 or later * CentOS Linux (or similar) 7.4 or later * oVirt Node 4.2 See the release notes [1] for installation / upgrade instructions and a list of new features and bugs fixed. Notes: - oVirt Appliance is already available - oVirt Node will be available soon [2] Additional Resources: * Read more about the oVirt 4.2.2 release highlights: http://www.ovirt.org/release/4.2.2/ * Get more oVirt Project updates on Twitter: https://twitter.com/ovirt * Check out the latest project news on the oVirt blog: http://www.ovirt.org/blog/ [1] http://www.ovirt.org/release/4.2.2/ [2] http://resources.ovirt.org/pub/ovirt-4.2-pre/iso/ -- Lev Veyde Software Engineer, RHCE | RHCVA | MCITP Red Hat Israel lev at redhat.com | lveyde at redhat.com TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From jiri.slezka at slu.cz Wed Feb 21 16:03:50 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Wed, 21 Feb 2018 17:03:50 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> Message-ID: <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> On 02/21/2018 03:43 PM, Ji?? Sl??ka wrote: > On 02/20/2018 11:09 PM, Arik Hadas wrote: >> >> >> On Tue, Feb 20, 2018 at 6:37 PM, Ji?? Sl??ka > > wrote: >> >> On 02/20/2018 03:48 PM, Arik Hadas wrote: >> > >> > >> > On Tue, Feb 20, 2018 at 3:49 PM, Ji?? Sl??ka >> > >> wrote: >> > >> >? ? ?Hi Arik, >> > >> >? ? ?On 02/20/2018 01:22 PM, Arik Hadas wrote: >> >? ? ?> >> >? ? ?> >> >? ? ?> On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka >> > >> >? ? ?> >> >>> wrote: >> >? ? ?> >> >? ? ?>? ? ?Hi, >> >? ? ?> >> >? ? ?> >> >? ? ?> Hi Ji??, >> >? ? ?> ? >> >? ? ?> >> >? ? ?> >> >? ? ?>? ? ?I would like to try import some ova files into our oVirt >> instance [1] >> >? ? ?>? ? ?[2] but I facing problems. >> >? ? ?> >> >? ? ?>? ? ?I have downloaded all ova images into one of hosts >> (ovirt01) into >> >? ? ?>? ? ?direcory /ova >> >? ? ?> >> >? ? ?>? ? ?ll /ova/ >> >? ? ?>? ? ?total 6532872 >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 >> HAAS-hpcowrie.ovf >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 >> HAAS-hpdio.ova >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 846736896 Feb 16 16:22 >> HAAS-hpjdwpd.ova >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 891043328 Feb 16 16:23 >> HAAS-hptelnetd.ova >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 908222464 Feb 16 16:23 >> HAAS-hpuchotcp.ova >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 880643072 Feb 16 16:24 >> HAAS-hpuchoudp.ova >> >? ? ?>? ? ?-rw-r--r--. 1 vdsm kvm? 890833920 Feb 16 16:24 >> HAAS-hpuchoweb.ova >> >? ? ?> >> >? ? ?>? ? 
?Then I tried to import them - from host ovirt01 and >> directory /ova but >> >? ? ?>? ? ?spinner spins infinitly and nothing is happen. >> >? ? ?> >> >? ? ?> >> >? ? ?> And does it work when you provide a path to the actual ova >> file, i.e., >> >? ? ?> /ova/HAAS-hpdio.ova, rather than to the directory? >> > >> >? ? ?this time it ends with "Failed to load VM configuration from >> OVA file: >> >? ? ?/ova/HAAS-hpdio.ova" error.? >> > >> > >> > Note that the logic that is applied on a specified folder is "try >> > fetching an 'ova folder' out of the destination folder" rather than >> > "list all the ova files inside the specified folder". It seems >> that you >> > expected the former output since there are no disks in that >> folder, right? >> >> yes, It would be more user friendly to list all ova files and then >> select which one to import (like listing all vms in vmware import) >> >> Maybe description of path field in manager should be "Path to ova file" >> instead of "Path" :-) >> >> >> Sorry, I obviously meant 'latter' rather than 'former' before.. >> Yeah, I agree that would be better, at least until listing the OVA files >> in the folder is implemented (that was the original plan, btw) - could >> you please file a bug? > > yes, sure > > >> >? ? ?>? ? ?I cannot see anything relevant in vdsm log of host ovirt01. >> >? ? ?> >> >? ? ?>? ? ?In the engine.log of our standalone ovirt manager is just this >> >? ? ?>? ? ?relevant line >> >? ? ?> >> >? ? ?>? ? ?2018-02-20 12:35:04,289+01 INFO >> >? ? ?>? ? ?[org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default >> >? ? ?>? ? ?task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible >> >? ? ?>? ? ?command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin >> >? ? ?>? ? ?[/usr/bin/ansible-playbook, >> >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, >> >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784, >> >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova, >> >? ? ?>? ? 
?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: >> >? ? ?>? ? ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net >> >> >? ? ?> > >> >? ? ?>? ? ?> >> >? ? ?> >>.slu.cz.log] >> >? ? ?> >> >? ? ?>? ? ?also there are two ansible processes which are still running >> >? ? ?(and makes >> >? ? ?>? ? ?heavy load on system (load 9+ and growing, it looks like it >> >? ? ?eats all the >> >? ? ?>? ? ?memory and system starts swapping)) >> >? ? ?> >> >? ? ?>? ? ?ovirt? ? 32087? 3.3? 0.0 332252? 5980 ?? ? ? ? Sl? >> ?12:35? ?0:41 >> >? ? ?>? ? ?/usr/bin/python2 /usr/bin/ansible-playbook >> >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa >> >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 >> >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova >> >? ? ?>? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml >> >? ? ?>? ? ?ovirt? ? 32099 57.5 78.9 15972880 11215312 ?? ?R? ? >> 12:35? 11:52 >> >? ? ?>? ? ?/usr/bin/python2 /usr/bin/ansible-playbook >> >? ? ?>? ? ?--private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa >> >? ? ?>? ? ?--inventory=/tmp/ansible-inventory8237874608161160784 >> >? ? ?>? ? ?--extra-vars=ovirt_query_ova_path=/ova >> >? ? ?>? ? ?/usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml >> >? ? ?> >> >? ? ?>? ? ?playbook looks like >> >? ? ?> >> >? ? ?>? ? ?- hosts: all >> >? ? ?>? ? ?? remote_user: root >> >? ? ?>? ? ?? gather_facts: no >> >? ? ?> >> >? ? ?>? ? ?? roles: >> >? ? ?>? ? ?? ? - ovirt-ova-query >> >? ? ?> >> >? ? ?>? ? ?and it looks like it only runs query_ova.py but on all >> hosts? >> >? ? ?> >> >? ? ?> >> >? ? ?> No, the engine provides ansible the host to run on when it >> >? ? ?executes the >> >? ? ?> playbook. >> >? ? ?> It would only be executed on the selected host. >> >? ? ?> ? >> >? ? ?> >> >? ? ?> >> >? ? ?>? ? ?How does this work? ...or should it work? >> >? ? ?> >> >? ? ?> >> >? ? 
?> It should, especially that part of querying the OVA and is >> supposed to >> >? ? ?> be really quick. >> >? ? ?> Can you please share the engine log and >> >? ? ?> >> >? ? >> ?/var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net >> >> >? ? ?> > >> >? ? ?> > >> >? ? ?> >>.slu.cz.log ? >> > >> >? ? ?engine log is here: >> > >> >? ? ?https://pastebin.com/nWWM3UUq >> > >> > >> > Thanks. >> > Alright, so now the configuration is fetched but its processing fails. >> > We fixed many issues in this area recently, but it appears that >> > something is wrong with the actual size of the disk within the ovf file >> > that resides inside this ova file. >> > Can you please share that ovf file that resides inside?/ova/HAAS-hpdio.ova? >> >> file HAAS-hpdio.ova >> HAAS-hpdio.ova: POSIX tar archive (GNU) >> >> [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova >> HAAS-hpdio.ovf >> HAAS-hpdio-disk001.vmdk >> >> file HAAS-hpdio.ovf is here: >> >> https://pastebin.com/80qAU0wB >> >> >> Thanks again. >> So that seems to be a VM that was exported from Virtual Box, right? >> They don't do anything that violates the OVF specification but they do >> some non-common things that we don't anticipate: > > yes, it is most likely ova from VirtualBox > >> First, they don't specify the actual size of the disk and the current >> code in oVirt relies on that property. >> There is a workaround for this though: you can extract an OVA file, edit >> its OVF configuration - adding ovf:populatedSize="X" (and change >> ovf:capacity as I'll describe next) to the Disk element inside the >> DiskSection and pack the OVA again (tar cvf > X is either: >> 1. the actual size of the vmdk file + some buffer (iirc, we used to take >> 15% of extra space for the conversion) >> 2. 
if you're using a file storage or you don't mind consuming more >> storage space on your block storage, simply set X to the virtual size of >> the disk (in bytes) as indicated by the ovf:capacity filed, e.g., >> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova. >> >> Second, the virtual size (indicated by ovf:capacity) is specified in >> bytes. The specification says that the default unit of allocation shall >> be bytes, but practically every OVA file that I've ever saw specified it >> in GB and the current code in oVirt kind of assumes that this is the >> case without checking the ovf:capacityAllocationUnits attribute that >> could indicate the real unit of allocation [1]. >> Anyway, long story short, the virtual size of the disk should currently >> be specified in GB, e.g., ovf:populatedSize="20" in the case of >> HAAS-hpdio.ova. > > wow, thanks for this excellent explanation. I have changed this in ovf file > > ... > ... > > then I was able to import this mofified ova file (HAAS-hpdio_new.ova). > Interesting thing is that the vm was shown in vm list for while (with > state down with lock and status was initializing). After while this vm > disapeared :-o > > I am going to test it again and collect some logs... 
there are interesting logs in /var/log/vdsm/import/ at the host used for import http://mirror.slu.cz/tmp/ovirt-import.tar.bz2 the first of them describes the situation where I chose thick provisioning, the second the situation with thin provisioning. The interesting part is, I believe: libguestfs: command: run: qemu-img libguestfs: command: run: \ create libguestfs: command: run: \ -f qcow2 libguestfs: command: run: \ -o preallocation=off,compat=0.10 libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec libguestfs: command: run: \ 21474836480 Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536 preallocation=off lazy_refcounts=off refcount_bits=16 libguestfs: trace: vdsm_disk_create: disk_create = 0 qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' '/var/tmp/v2vovl2dccbd.qcow2' '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec' qemu-img: error while writing sector 1000960: No space left on device virt-v2v: error: qemu-img command failed, see earlier errors > >> That should do it. If not, please share the OVA file and I will examine >> it in my environment. > > original file is at > > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova > >> >> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220 >> >> >> >> > file >> > >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net >> >> > > > >> > in the fact does not exists (nor folder /var/log/ovirt-engine/ova/) >> > >> > >> > This issue is also resolved in 4.2.2.
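The "No space left on device" failure above is consistent with a qcow2 volume whose initial allocation, derived from ovf:populatedSize, was too small for the converted data. The two sizing options Arik described earlier in the thread can be expressed as a tiny helper; note the 15% buffer is the value recalled in the thread ("iirc"), not a documented constant, and the function name is illustrative:

```python
def populated_size(vmdk_size_bytes, virtual_size_bytes, on_block_storage=True):
    """Pick a value for ovf:populatedSize.

    Option 1 (saves space on block storage): actual vmdk size plus a
    ~15% conversion buffer.  Option 2 (safe, but consumes more space):
    simply the full virtual size taken from ovf:capacity.
    """
    if on_block_storage:
        # integer arithmetic to stay exact: size + 15%
        return vmdk_size_bytes + vmdk_size_bytes * 15 // 100
    return virtual_size_bytes
```

If option 1 still ends in ENOSPC during virt-v2v's qemu-img convert, falling back to option 2 (the full virtual size) is the safe choice.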
>> > In the meantime, please create the ?/var/log/ovirt-engine/ova/ folder >> > manually and make sure its permissions match the ones of the other >> > folders in ?/var/log/ovirt-engine. >> >> ok, done. After another try there is this log file >> >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net >> .slu.cz.log >> >> https://pastebin.com/M5J44qur >> >> >> Is it the log of the execution of the ansible playbook that was provided >> with a path to the /ova folder? >> I'm interested in that in order to see how comes that its execution >> never completed. > > well, I dont think so, it is log from import with full path to ova file > > > >> ? >> >> >> >> >? ? ?Cheers, >> > >> >? ? ?Jiri Slezka >> > >> >? ? ?> ? >> >? ? ?> >> >? ? ?> >> >? ? ?>? ? ?I am using latest 4.2.1.7-1.el7.centos version >> >? ? ?> >> >? ? ?>? ? ?Cheers, >> >? ? ?>? ? ?Jiri Slezka >> >? ? ?> >> >? ? ?> >> >? ? ?>? ? ?[1] https://haas.cesnet.cz/#!index.md >> >> > >> >? ? ?>? ? ? >> >? ? ?> >> - Cesnet HAAS >> >? ? ?>? ? ?[2] https://haas.cesnet.cz/downloads/release-01/ >> >> >? ? ?> > >> >? ? ?>? ? ?> >> >? ? ?> >> - Image repository >> >? ? ?> >> >? ? ?> >> >? ? ?>? ? ?_______________________________________________ >> >? ? ?>? ? ?Users mailing list >> >? ? ?>? ? ?Users at ovirt.org > > >> >? ? ? >> >> >> >? ? ?>? ? ?http://lists.ovirt.org/mailman/listinfo/users >> >> >? ? ?> > >> >? ? ?>? ? ?> >> >? ? ?> >> >> >? ? ?> >> >? ? ?> >> > >> > >> > >> >? ? ?_______________________________________________ >> >? ? ?Users mailing list >> >? ? ?Users at ovirt.org >> > >> >? ? ?http://lists.ovirt.org/mailman/listinfo/users >> >> >? ? 
> > >> > >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> >> > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From alkaplan at redhat.com Wed Feb 21 16:17:03 2018 From: alkaplan at redhat.com (Alona Kaplan) Date: Wed, 21 Feb 2018 18:17:03 +0200 Subject: [ovirt-users] Manageiq ovn In-Reply-To: References: <20180216172212.464828c3@t460p> <20180219120531.277ebcec@t460p> <20180219123734.174899aa@t460p> Message-ID: Hi Aliaksei. First of all, please reply to the users at ovirt.org list, so all our users can enjoy the discussion. To summarize, you currently have two questions. 1. How to automatically trigger the provider refresh after making changes to the provider? There is an open RFE regarding it - https://bugzilla.redhat.com/1547415; you can add yourself to its CC list to track it. 2. Adding a router with an external gateway is not working, since an IP address is expected in external_fixed_ips by the OVN provider but ManageIQ doesn't provide one. Looking at the neutron API ( https://developer.openstack.org/api-ref/network/v2/#create-router), it seems the IP address is mandatory. So it is a ManageIQ bug (I also tried to add a router with an external gateway and no IP address directly to neutron and got an error). As a workaround to the bug, you can add the router to the ovn-provider directly using the API - https://gist.github.com/dominikholler/f58658407ae7620280f4cb47c398d849 Mor, can you please open a bug regarding the issue? On Tue, Feb 20, 2018 at 12:32 PM, Aliaksei Nazarenka < aliaksei.nazarenka at gmail.com> wrote: > Hi, Alona!
> Can you help ve with add external ip for creating router procedure? > > 2018-02-19 14:52 GMT+03:00 Aliaksei Nazarenka < > aliaksei.nazarenka at gmail.com>: > >> I do not really understand the essence of how this will work, you specify >> the router 10.0.0.2, while on dhcp will be distributed ip gateway 10.0.0.1? >> It seems to me that in the role of geystwey just had to act as a router, or >> am I wrong? >> >> 2018-02-19 14:46 GMT+03:00 Aliaksei Nazarenka < >> aliaksei.nazarenka at gmail.com>: >> >>> Dominik sent me this link here - https://gist.github.com/domini >>> kholler/f58658407ae7620280f4cb47c398d849 >>> >>> 2018-02-19 14:45 GMT+03:00 Aliaksei Nazarenka < >>> aliaksei.nazarenka at gmail.com>: >>> >>>> Hi, Alona! >>>> Dominik said that you can help. I need to create an external gateway in >>>> manageiq, I did not find a native way to do this. As a result of lack of ip >>>> address, I can not create a router. Here are the logs: >>>> >>>> 2018-02-19 14:22:16,942 root Starting server >>>> 2018-02-19 14:22:16,943 root Version: 1.2.5-1 >>>> 2018-02-19 14:22:16,943 root Build date: 20180117090014 >>>> 2018-02-19 14:22:16,944 root Githash: 12b705d >>>> 2018-02-19 14:23:17,250 root From: 10.0.184.20:57674 Request: POST >>>> /v2.0/tokens >>>> 2018-02-19 14:23:17,252 root Request body: >>>> {"auth": {"tenantName": "tenant", "passwordCredentials": {"username": >>>> "admin at internal", "password": ""}}} >>>> 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:17,870 root 
Response code: 200 >>>> 2018-02-19 14:23:17,870 root Response body: {"access": {"token": >>>> {"expires": "2018-02-23T15:23:17Z", "id": "lmceG8s1SROKskN-H4T93IPwgwSFb >>>> 8mN7UJ6qz4HObrC2PqNWcSyS_dZGIcax6dEBVIz8H6ShgDXl_2fvflbeg"}, >>>> "serviceCatalog": [{"endpoints_links": [], "endpoints": [{"adminURL": " >>>> https://lbn-r-engine-01.mp.local:9696/", "region": "RegionOne", "id": >>>> "00000000000000000000000000000001", "internalURL": " >>>> https://lbn-r-engine-01.mp.local:9696/", "publicURL": " >>>> https://lbn-r-engine-01.mp.local:9696/"}], "type": "network", "name": >>>> "neutron"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>> https://lbn-r-engine-01.mp.local:35357/", "region": "RegionOne", >>>> "publicURL": "https://lbn-r-engine-01.mp.local:35357/", "internalURL": >>>> "https://lbn-r-engine-01.mp.local:35357/", "id": >>>> "00000000000000000000000000000002"}], "type": "identity", "name": >>>> "keystone"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>> https://lbn-r-engine-01.mp.local:8774/v2.1/", "region": "RegionOne", >>>> "publicURL": "https://lbn-r-engine-01.mp.local:8774/v2.1/", >>>> "internalURL": "https://lbn-r-engine-01.mp.local:8774/v2.1/", "id": >>>> "00000000000000000000000000000002"}], "type": "compute", "name": >>>> "nova"}], "user": {"username": "admin", "roles_links": [], "id": "", >>>> "roles": [{"name": "admin"}], "name": "admin"}}} >>>> 2018-02-19 14:23:17,974 root From: 10.0.184.20:43600 Request: POST >>>> /v2.0/routers >>>> 2018-02-19 14:23:17,974 root Request body: >>>> {"router":{"name":"test_router","external_gateway_info":{"ne >>>> twork_id":"17c31685-56ef-428a-94dd-3202bf407d36","external_f >>>> ixed_ips":[{"subnet_id":"c425f071-4b4e-4598-8c56-d5457a59dac >>>> 3"}],"enable_snat":0}}} >>>> 2018-02-19 14:23:17,980 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:17,980 requests.packages.urllib3.connectionpool >>>> Starting new HTTPS 
connection (1): lbn-r-engine-01.mp.local >>>> 2018-02-19 14:23:18,391 root ip_address missing in the external gateway >>>> information. >>>> Traceback (most recent call last): >>>> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >>>> 131, in _handle_request >>>> method, path_parts, content) >>>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>> line 175, in handle_request >>>> return self.call_response_handler(handler, content, parameters) >>>> File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, >>>> in call_response_handler >>>> return response_handler(ovn_north, content, parameters) >>>> File "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>> line 205, in post_routers >>>> router = nb_db.add_router(received_router) >>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>> line 57, in wrapper >>>> validate_rest_input(rest_data) >>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>> line 530, in validate_add_rest_input >>>> RouterMapper._validate_external_gateway_info(rest_data) >>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>> line 565, in _validate_external_gateway_info >>>> message.format(key=RouterMapper.REST_ROUTER_IP_ADDRESS) >>>> RestDataError >>>> >>>> >>>> 2018-02-19 14:37 GMT+03:00 Dominik Holler : >>>> >>>>> >>>>> >>>>> On Mon, 19 Feb 2018 14:29:39 +0300 >>>>> Aliaksei Nazarenka wrote: >>>>> >>>>> > How is this external gateway configured? >>>>> > >>>>> >>>>> >>>>> I created this log entry by >>>>> https://gist.github.com/dominikholler/f58658407ae7620280f4cb47c398d849 >>>>> >>>>> But maybe Alona will tell you how to do this with ManageIQ tomorrow. 
>>>>> >>>>> > 2018-02-19 14:27 GMT+03:00 Aliaksei Nazarenka >>>>> > : >>>>> > >>>>> > > 2018-02-19 14:22:16,942 root Starting server >>>>> > > 2018-02-19 14:22:16,943 root Version: 1.2.5-1 >>>>> > > 2018-02-19 14:22:16,943 root Build date: 20180117090014 >>>>> > > 2018-02-19 14:22:16,944 root Githash: 12b705d >>>>> > > 2018-02-19 14:23:17,250 root From: 10.0.184.20:57674 Request: POST >>>>> > > /v2.0/tokens >>>>> > > 2018-02-19 14:23:17,252 root Request body: >>>>> > > {"auth": {"tenantName": "tenant", "passwordCredentials": >>>>> > > {"username": "admin at internal", "password": ""}}} >>>>> > > 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> > > 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> > > 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> > > 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> > > 2018-02-19 14:23:17,870 root Response code: 200 >>>>> > > 2018-02-19 14:23:17,870 root Response body: {"access": {"token": >>>>> > > {"expires": "2018-02-23T15:23:17Z", "id": "lmceG8s1SROKskN- >>>>> > > H4T93IPwgwSFb8mN7UJ6qz4HObrC2PqNWcSyS_dZGIcax6dEBVIz8H6ShgDX >>>>> l_2fvflbeg"}, >>>>> > > "serviceCatalog": [{"endpoints_links": [], "endpoints": >>>>> > > [{"adminURL": " https://lbn-r-engine-01.mp.local:9696/", "region": >>>>> > > "RegionOne", "id": " 00000000000000000000000000000001", >>>>> > > "internalURL": " https://lbn-r-engine-01.mp.local:9696/", >>>>> > > "publicURL": " https://lbn-r-engine-01.mp.local:9696/"}], "type": >>>>> > > "network", "name": "neutron"}, {"endpoints_links": [], "endpoints": >>>>> > > [{"adminURL": " https://lbn-r-engine-01.mp.local:35357/", >>>>> "region": >>>>> > > 
"RegionOne", "publicURL": >>>>> > > "https://lbn-r-engine-01.mp.local:35357/", "internalURL": " >>>>> > > https://lbn-r-engine-01.mp.local:35357/", "id": " >>>>> > > 00000000000000000000000000000002"}], "type": "identity", "name": >>>>> > > "keystone"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>>> > > https://lbn-r-engine-01.mp.local:8774/v2.1/", "region": >>>>> > > "RegionOne", "publicURL": >>>>> > > "https://lbn-r-engine-01.mp.local:8774/v2.1/", "internalURL": >>>>> > > "https://lbn-r-engine-01.mp.local:8774/v2.1/", "id": " >>>>> > > 00000000000000000000000000000002"}], "type": "compute", "name": >>>>> > > "nova"}], "user": {"username": "admin", "roles_links": [], "id": >>>>> > > "", "roles": [{"name": "admin"}], "name": "admin"}}} 2018-02-19 >>>>> > > 14:23:17,974 root From: 10.0.184.20:43600 Request: >>>>> > > POST /v2.0/routers 2018-02-19 14:23:17,974 root Request body: >>>>> > > {"router":{"name":"test_router","external_gateway_ >>>>> > > info":{"network_id":"17c31685-56ef-428a-94dd-3202bf407d36"," >>>>> > > external_fixed_ips":[{"subnet_id":"c425f071-4b4e-4598-8c56- >>>>> > > d5457a59dac3"}],"enable_snat":0}}} 2018-02-19 14:23:17,980 >>>>> > > requests.packages.urllib3.connectionpool Starting new HTTPS >>>>> > > connection (1): lbn-r-engine-01.mp.local 2018-02-19 14:23:17,980 >>>>> > > requests.packages.urllib3.connectionpool Starting new HTTPS >>>>> > > connection (1): lbn-r-engine-01.mp.local 2018-02-19 14:23:18,391 >>>>> > > root ip_address missing in the external gateway information. 
>>>>> > > Traceback (most recent call last): >>>>> > > File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", >>>>> > > line 131, in _handle_request >>>>> > > method, path_parts, content) >>>>> > > File >>>>> > > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>>> line >>>>> > > 175, in handle_request return self.call_response_handler(handler, >>>>> > > content, parameters) File >>>>> > > "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, in >>>>> > > call_response_handler return response_handler(ovn_north, content, >>>>> > > parameters) File >>>>> > > "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>>> line >>>>> > > 205, in post_routers router = nb_db.add_router(received_router) >>>>> > > File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> > > line 57, in wrapper >>>>> > > validate_rest_input(rest_data) >>>>> > > File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> > > line 530, in validate_add_rest_input >>>>> > > RouterMapper._validate_external_gateway_info(rest_data) >>>>> > > File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> > > line 565, in _validate_external_gateway_info >>>>> > > message.format(key=RouterMapper.REST_ROUTER_IP_ADDRESS) >>>>> > > RestDataError >>>>> > > >>>>> > > >>>>> > > It can be seen that there is no external ip for the router. But how >>>>> > > to ask it and where is it done? >>>>> > > >>>>> > > 2018-02-19 14:05 GMT+03:00 Dominik Holler : >>>>> > > >>>>> > >> Hi Alexey, >>>>> > >> can you please change level of logger_root and handler_logfile in >>>>> > >> /etc/ovirt-provider-ovn/logger.conf >>>>> > >> to DEBUG, restart ovirt-provider-ovn, try to create the router >>>>> > >> again and share the logfile with us? >>>>> > >> The logfile has to contain the relevant request, e.g. 
similar to >>>>> > >> this: >>>>> > >> >>>>> > >> 2018-02-19 11:59:12,477 root From: 192.168.122.79:50084 Request: >>>>> > >> POST /v2.0/routers.json 2018-02-19 11:59:12,477 root Request body: >>>>> > >> {"router": {"external_gateway_info": {"network_id": >>>>> > >> "c1d4f8e3-8b5d-464e-825a-5f615a18a900", "enable_snat": false, >>>>> > >> "external_fixed_ips": [{"subnet_id": >>>>> > >> "08efc369-ff36-4dd4-b5f9-ada86d7724db", "ip_address": >>>>> > >> "10.0.0.2"}]}, "name": "add_router_router", "admin_state_up": >>>>> > >> true}} >>>>> > >> >>>>> > >> Thanks, >>>>> > >> Dominik >>>>> > >> >>>>> > >> >>>>> > >> On Mon, 19 Feb 2018 11:40:02 +0300 >>>>> > >> Aliaksei Nazarenka wrote: >>>>> > >> >>>>> > >> > Good afternoon! >>>>> > >> > With the synchronization of the created networks in manageiq and >>>>> > >> > ovirt everything is OK, thanks a lot! The only nuance - after >>>>> > >> > creating a network or subnet in manageiq, you need to manually >>>>> > >> > update the state after which you can see these items in the >>>>> > >> > list. Is there any way to automate this process? Also, maybe you >>>>> > >> > can help me: when I create a router, I get an error. Unable to >>>>> > >> > create a Network Router "test": undefined method `[] 'for nil: >>>>> > >> > NilClass and in the logs at this point next" 2018-02-19 11: 22: >>>>> > >> > 19,391 root ip_address missing in the external gateway >>>>> > >> > information. 
Traceback (most recent last call last): File >>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >>>>> > >> > 131, in _handle_request method, path_parts, content) >>>>> > >> > File >>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>>> > >> > line 175, in handle_request return self.call_response_handler >>>>> > >> > (handler, content, parameters) File >>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, >>>>> in >>>>> > >> > call_response_handler return response_handler (ovn_north, >>>>> > >> > content, parameters) File >>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>>> > >> > line 205, in post_routers router = nb_db.add_router >>>>> > >> > (received_router) File >>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line >>>>> > >> > 57, in wrapper validate_rest_input (rest_data) >>>>> > >> > File >>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line >>>>> > >> > 530, in validate_add_rest_input >>>>> > >> > RouterMapper._validate_external_gateway_info (rest_data) File >>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line >>>>> > >> > 565, in _validate_external_gateway_info message.format (key = >>>>> > >> > RouterMapper.REST_ROUTER_IP_ADDRESS) RestDataError " >>>>> > >> > Swears at the missing external ip address of the router. The >>>>> > >> > question is how to set it? >>>>> > >> > >>>>> > >> > 2018-02-16 19:22 GMT+03:00 Dominik Holler : >>>>> > >> > >>>>> > >> > > Hi Alexey, >>>>> > >> > > For the provider ovirt-provider-ovn created by engine-setup >>>>> the >>>>> > >> > > automatic synchronization of networks of cluster with this >>>>> > >> > > provider as default network provider is activated by >>>>> > >> > > engine-setup. >>>>> > >> > > >>>>> > >> > > Please find [1] if you want to activate this feature for other >>>>> > >> > > providers, too. 
Additional information about controlling the >>>>> > >> > > synchronization are available in [2]. >>>>> > >> > > >>>>> > >> > > Please find my question below. >>>>> > >> > > >>>>> > >> > > [1] >>>>> > >> > > https://bugzilla.redhat.com/attachment.cgi?id=1397090 >>>>> > >> > > >>>>> > >> > > [2] >>>>> > >> > > http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/ >>>>> > >> > > open_stack_network_provider/attributes/auto_sync >>>>> > >> > > >>>>> > >> > > On Fri, 16 Feb 2018 10:00:46 +0200 >>>>> > >> > > Alona Kaplan wrote: >>>>> > >> > > >>>>> > >> > > > Hi Dominik, >>>>> > >> > > > >>>>> > >> > > > Can you please help Alexey? >>>>> > >> > > > >>>>> > >> > > > Thanks, >>>>> > >> > > > Alona. >>>>> > >> > > > >>>>> > >> > > > On Feb 16, 2018 09:48, "Aliaksei Nazarenka" >>>>> > >> > > > wrote: >>>>> > >> > > > >>>>> > >> > > > Hello! >>>>> > >> > > > I read this - " >>>>> > >> > > > Dominik Holler 2018-01-25 10:45:09 EST >>>>> > >> > > > >>>>> > >> > > > Currently, the property is only available in rest-api and >>>>> not >>>>> > >> > > > available in webadmin. For backward compatibility, the >>>>> > >> > > > property is set to 'disabled' by default in rest-api and >>>>> > >> > > > webadmin. If you think the property should be available in >>>>> > >> > > > webadmin, please create a bug with a proposed default value >>>>> > >> > > > to track this." >>>>> > >> > > > >>>>> > >> > > > and i understand this feature (auto add ovn network in >>>>> > >> > > > ovirt) of now. How can i do to on it? I read all comments, >>>>> > >> > > > but strangely - most of the files either do not exist for me >>>>> > >> > > > or are in other places and >>>>> > >> > > >>>>> > >> > > Can you help me finding this comments? >>>>> > >> > > >>>>> > >> > > > already have the current version. Could you tell me >>>>> > >> > > > specifically where this function is turned on? 
I will >>>>> > >> > > > repeat, I use Ovirt engine >>>>> > >> > > > 4.2.2.1-0.0.master.20180214165528.git38ff5af.el7.centos >>>>> > >> > > > >>>>> > >> > > > >>>>> > >> > > > >>>>> > >> > > > 2018-02-15 18:03 GMT+03:00 Alona Kaplan >>>>> > >> > > > : >>>>> > >> > > > > Currently, AFAIK there is no request to add this >>>>> > >> > > > > functionality to manageiq. You're welcome to open a bug to >>>>> > >> > > > > request it. Anyway, you can easily attach ovn networks to >>>>> > >> > > > > vms using ovirt. >>>>> > >> > > > > >>>>> > >> > > > > On Feb 15, 2018 16:11, "Aliaksei Nazarenka" >>>>> > >> > > > > wrote: >>>>> > >> > > > > >>>>> > >> > > > >> Is it planned to add this functionality? >>>>> > >> > > > >> >>>>> > >> > > > >> 2018-02-15 17:10 GMT+03:00 Alona Kaplan >>>>> > >> > > > >> : >>>>> > >> > > > >>> >>>>> > >> > > > >>> >>>>> > >> > > > >>> On Thu, Feb 15, 2018 at 4:03 PM, Aliaksei Nazarenka < >>>>> > >> > > > >>> aliaksei.nazarenka at gmail.com> wrote: >>>>> > >> > > > >>> >>>>> > >> > > > >>>> and how i can change network in the created VM? >>>>> > >> > > > >>>> >>>>> > >> > > > >>> >>>>> > >> > > > >>> It is not possible via manageiq. Only via ovirt. >>>>> > >> > > > >>> >>>>> > >> > > > >>> >>>>> > >> > > > >>>> >>>>> > >> > > > >>>> Sorry for my intrusive questions))) >>>>> > >> > > > >>>> >>>>> > >> > > > >>>> 2018-02-15 16:51 GMT+03:00 Aliaksei Nazarenka < >>>>> > >> > > > >>>> aliaksei.nazarenka at gmail.com>: >>>>> > >> > > > >>>> >>>>> > >> > > > >>>>> ovirt-provider-ovn-1.2.7-0.201 >>>>> 80213232754.gitebd60ad.el7. 
>>>>> > >> > > centos.noarch >>>>> > >> > > > >>>>> on hosted-engine >>>>> > >> > > > >>>>> ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch >>>>> on >>>>> > >> > > > >>>>> ovirt hosts >>>>> > >> > > > >>>>> >>>>> > >> > > > >>>>> 2018-02-15 16:40 GMT+03:00 Alona Kaplan >>>>> > >> > > > >>>>> : >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>> On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka >>>>> > >> > > > >>>>>> < aliaksei.nazarenka at gmail.com> wrote: >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>>> when i try to create network router, i see this >>>>> > >> > > > >>>>>>> message: *Unable to create Network Router >>>>> > >> > > > >>>>>>> "test_router": undefined method `[]' for >>>>> > >> > > > >>>>>>> nil:NilClass* >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>> What ovn-provider version you're using? Can you >>>>> please >>>>> > >> > > > >>>>>> attach the ovn provider log >>>>> > >> > > > >>>>>> ( /var/log/ovirt-provider-ovn.log)? >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>>>> >>>>> > >> > > > >>>>>>> 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka < >>>>> > >> > > > >>>>>>> aliaksei.nazarenka at gmail.com>: >>>>> > >> > > > >>>>>>> >>>>> > >> > > > >>>>>>>> Big Thank you! This work! But... Networks are >>>>> > >> > > > >>>>>>>> created, but I do not see them in the ovirt >>>>> > >> > > > >>>>>>>> manager, but through the ovn-nbctl command, I see >>>>> > >> > > > >>>>>>>> all the networks. And maybe you can tell me how to >>>>> > >> > > > >>>>>>>> assign a VM network from Manageiq? 
>>>>> > >> > > > >>>>>>>> >>>>> > >> > > > >>>>>>>> 2018-02-15 15:01 GMT+03:00 Alona Kaplan >>>>> > >> > > > >>>>>>>> : >>>>> > >> > > > >>>>>>>> >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei >>>>> > >> > > > >>>>>>>>> Nazarenka < aliaksei.nazarenka at gmail.com> wrote: >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>>> Error - 1 Minute Ago >>>>> > >> > > > >>>>>>>>>> undefined method `orchestration_stacks' for >>>>> > >> > > > >>>>>>>>>> #>>>> > >> :InfraManager:0x00000007bf9288> >>>>> > >> > > > >>>>>>>>>> - I get this message if I try to create a network >>>>> > >> > > > >>>>>>>>>> of overts and then try to check the status of the >>>>> > >> > > > >>>>>>>>>> network manager. >>>>> > >> > > > >>>>>>>>>> >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>> It is the same bug. >>>>> > >> > > > >>>>>>>>> You need to apply the fixes in >>>>> > >> > > > >>>>>>>>> https://github.com/ManageIQ/ma >>>>> > >> > > > >>>>>>>>> nageiq-providers-ovirt/pull/198/files to make it >>>>> > >> > > > >>>>>>>>> work. The best option is to upgrade your version. >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < >>>>> > >> > > > >>>>>>>>>> aliaksei.nazarenka at gmail.com>: >>>>> > >> > > > >>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>> I tried to make changes to the file >>>>> > >> > > > >>>>>>>>>>> refresher_ovn_provider.yml - changed the >>>>> > >> > > > >>>>>>>>>>> passwords, corrected the names of the names, but >>>>> > >> > > > >>>>>>>>>>> it was not successful. >>>>> > >> > > > >>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >>>>> > >> > > > >>>>>>>>>>> aliaksei.nazarenka at gmail.com>: >>>>> > >> > > > >>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>> Hi! 
>>>>> > >> > > > >>>>>>>>>>>> I'm use oVirt 4.2.2 + Manageiq >>>>> > >> > > > >>>>>>>>>>>> gaprindashvili-1.2018012514301 9_1450f27 >>>>> > >> > > > >>>>>>>>>>>> After i set this commits (upstream - >>>>> > >> > > > >>>>>>>>>>>> https://bugzilla.redhat.com/1542063) i no saw >>>>> > >> > > > >>>>>>>>>>>> changes. >>>>> > >> > > > >>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan >>>>> > >> > > > >>>>>>>>>>>> : >>>>> > >> > > > >>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> Hi, >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> What version of manageiq you are using? >>>>> > >> > > > >>>>>>>>>>>>> We had a bug >>>>> > >> > > > >>>>>>>>>>>>> https://bugzilla.redhat.com/1542152 (upstream >>>>> > >> > > > >>>>>>>>>>>>> - https://bugzilla.redhat.com/1542063) that >>>>> > >> > > > >>>>>>>>>>>>> was fixed in version 5.9.0.20 >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> Please let me know it upgrading the version >>>>> > >> > > > >>>>>>>>>>>>> helped you. >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> Thanks, >>>>> > >> > > > >>>>>>>>>>>>> Alona. >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei >>>>> > >> > > > >>>>>>>>>>>>> Nazarenka < aliaksei.nazarenka at gmail.com> >>>>> > >> > > > >>>>>>>>>>>>> wrote: >>>>> > >> > > > >>>>>>>>>>>>>> Good afternoon! >>>>> > >> > > > >>>>>>>>>>>>>> I read your article - >>>>> > >> > > > >>>>>>>>>>>>>> https://www.ovirt.org/develop/ >>>>> > >> > > > >>>>>>>>>>>>>> release-management/features/ne >>>>> twork/manageiq_ovn/. >>>>> > >> > > > >>>>>>>>>>>>>> I have only one question: how to create a >>>>> > >> > > > >>>>>>>>>>>>>> network or subnet in Manageiq + ovirt 4.2.1. >>>>> > >> > > > >>>>>>>>>>>>>> When I try to create a network, I need to >>>>> > >> > > > >>>>>>>>>>>>>> select a tenant, but there is nothing that I >>>>> > >> > > > >>>>>>>>>>>>>> could choose. How can it be? 
>>>>> > >> > > > >>>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>>> Sincerely. Alexey Nazarenko >>>>> > >> > > > >>>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>>> >>>>> > >> > > > >>>>>>>>>> >>>>> > >> > > > >>>>>>>>> >>>>> > >> > > > >>>>>>>> >>>>> > >> > > > >>>>>>> >>>>> > >> > > > >>>>>> >>>>> > >> > > > >>>>> >>>>> > >> > > > >>>> >>>>> > >> > > > >>> >>>>> > >> > > > >> >>>>> > >> > > >>>>> > >> > > >>>>> > >> >>>>> > >> >>>>> > > >>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahadas at redhat.com Wed Feb 21 16:35:05 2018 From: ahadas at redhat.com (Arik Hadas) Date: Wed, 21 Feb 2018 18:35:05 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> Message-ID: On Wed, Feb 21, 2018 at 6:03 PM, Ji?? Sl??ka wrote: > On 02/21/2018 03:43 PM, Ji?? Sl??ka wrote: > > On 02/20/2018 11:09 PM, Arik Hadas wrote: > >> > >> > >> On Tue, Feb 20, 2018 at 6:37 PM, Ji?? Sl??ka >> > wrote: > >> > >> On 02/20/2018 03:48 PM, Arik Hadas wrote: > >> > > >> > > >> > On Tue, Feb 20, 2018 at 3:49 PM, Ji?? Sl??ka > >> > >> wrote: > >> > > >> > Hi Arik, > >> > > >> > On 02/20/2018 01:22 PM, Arik Hadas wrote: > >> > > > >> > > > >> > > On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka < > jiri.slezka at slu.cz > >> > > >> > > > >> >>> wrote: > >> > > > >> > > Hi, > >> > > > >> > > > >> > > Hi Ji??, > >> > > > >> > > > >> > > > >> > > I would like to try import some ova files into our oVirt > >> instance [1] > >> > > [2] but I facing problems. > >> > > > >> > > I have downloaded all ova images into one of hosts > >> (ovirt01) into > >> > > direcory /ova > >> > > > >> > > ll /ova/ > >> > > total 6532872 > >> > > -rw-r--r--. 
1 vdsm kvm 1160387072 Feb 16 16:21 > >> HAAS-hpcowrie.ovf > >> > > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 > >> HAAS-hpdio.ova > >> > > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 > >> HAAS-hpjdwpd.ova > >> > > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 > >> HAAS-hptelnetd.ova > >> > > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 > >> HAAS-hpuchotcp.ova > >> > > -rw-r--r--. 1 vdsm kvm 880643072 Feb 16 16:24 > >> HAAS-hpuchoudp.ova > >> > > -rw-r--r--. 1 vdsm kvm 890833920 Feb 16 16:24 > >> HAAS-hpuchoweb.ova > >> > > > >> > > Then I tried to import them - from host ovirt01 and > >> directory /ova but > >> > > spinner spins infinitly and nothing is happen. > >> > > > >> > > > >> > > And does it work when you provide a path to the actual ova > >> file, i.e., > >> > > /ova/HAAS-hpdio.ova, rather than to the directory? > >> > > >> > this time it ends with "Failed to load VM configuration from > >> OVA file: > >> > /ova/HAAS-hpdio.ova" error. > >> > > >> > > >> > Note that the logic that is applied on a specified folder is "try > >> > fetching an 'ova folder' out of the destination folder" rather > than > >> > "list all the ova files inside the specified folder". It seems > >> that you > >> > expected the former output since there are no disks in that > >> folder, right? > >> > >> yes, It would be more user friendly to list all ova files and then > >> select which one to import (like listing all vms in vmware import) > >> > >> Maybe description of path field in manager should be "Path to ova > file" > >> instead of "Path" :-) > >> > >> > >> Sorry, I obviously meant 'latter' rather than 'former' before.. > >> Yeah, I agree that would be better, at least until listing the OVA files > >> in the folder is implemented (that was the original plan, btw) - could > >> you please file a bug? > > > > yes, sure > > > > > >> > > I cannot see anything relevant in vdsm log of host > ovirt01. 
> >> > > > >> > > In the engine.log of our standalone ovirt manager is > just this > >> > > relevant line > >> > > > >> > > 2018-02-20 12:35:04,289+01 INFO > >> > > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] > (default > >> > > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] > Executing Ansible > >> > > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > >> > > [/usr/bin/ansible-playbook, > >> > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > >> > > --inventory=/tmp/ansible-inventory8237874608161160784, > >> > > --extra-vars=ovirt_query_ova_path=/ova, > >> > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] > [Logfile: > >> > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > >> > >> > >> > > >> > > >> > >> > >> >>.slu.cz.log] > >> > > > >> > > also there are two ansible processes which are still > running > >> > (and makes > >> > > heavy load on system (load 9+ and growing, it looks > like it > >> > eats all the > >> > > memory and system starts swapping)) > >> > > > >> > > ovirt 32087 3.3 0.0 332252 5980 ? Sl > >> 12:35 0:41 > >> > > /usr/bin/python2 /usr/bin/ansible-playbook > >> > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >> > > --inventory=/tmp/ansible-inventory8237874608161160784 > >> > > --extra-vars=ovirt_query_ova_path=/ova > >> > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > >> > > ovirt 32099 57.5 78.9 15972880 11215312 ? 
R > >> 12:35 11:52 > >> > > /usr/bin/python2 /usr/bin/ansible-playbook > >> > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > >> > > --inventory=/tmp/ansible-inventory8237874608161160784 > >> > > --extra-vars=ovirt_query_ova_path=/ova > >> > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml > >> > > > >> > > playbook looks like > >> > > > >> > > - hosts: all > >> > > remote_user: root > >> > > gather_facts: no > >> > > > >> > > roles: > >> > > - ovirt-ova-query > >> > > > >> > > and it looks like it only runs query_ova.py but on all > >> hosts? > >> > > > >> > > > >> > > No, the engine provides ansible the host to run on when it > >> > executes the > >> > > playbook. > >> > > It would only be executed on the selected host. > >> > > > >> > > > >> > > > >> > > How does this work? ...or should it work? > >> > > > >> > > > >> > > It should, especially that part of querying the OVA and is > >> supposed to > >> > > be really quick. > >> > > Can you please share the engine log and > >> > > > >> > > >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > >> > >> > >> > > >> > > >> > >> > >> >>.slu.cz.log ? > >> > > >> > engine log is here: > >> > > >> > https://pastebin.com/nWWM3UUq > >> > > >> > > >> > Thanks. > >> > Alright, so now the configuration is fetched but its processing > fails. > >> > We fixed many issues in this area recently, but it appears that > >> > something is wrong with the actual size of the disk within the > ovf file > >> > that resides inside this ova file. > >> > Can you please share that ovf file that resides > inside /ova/HAAS-hpdio.ova? > >> > >> file HAAS-hpdio.ova > >> HAAS-hpdio.ova: POSIX tar archive (GNU) > >> > >> [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova > >> HAAS-hpdio.ovf > >> HAAS-hpdio-disk001.vmdk > >> > >> file HAAS-hpdio.ovf is here: > >> > >> https://pastebin.com/80qAU0wB > >> > >> > >> Thanks again. > >> So that seems to be a VM that was exported from Virtual Box, right? 
> >> They don't do anything that violates the OVF specification but they do > >> some non-common things that we don't anticipate: > > > > yes, it is most likely ova from VirtualBox > > > >> First, they don't specify the actual size of the disk and the current > >> code in oVirt relies on that property. > >> There is a workaround for this though: you can extract an OVA file, edit > >> its OVF configuration - adding ovf:populatedSize="X" (and change > >> ovf:capacity as I'll describe next) to the Disk element inside the > >> DiskSection and pack the OVA again (tar cvf >> X is either: > >> 1. the actual size of the vmdk file + some buffer (iirc, we used to take > >> 15% of extra space for the conversion) > >> 2. if you're using a file storage or you don't mind consuming more > >> storage space on your block storage, simply set X to the virtual size of > >> the disk (in bytes) as indicated by the ovf:capacity field, e.g., > >> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova. > >> > >> Second, the virtual size (indicated by ovf:capacity) is specified in > >> bytes. The specification says that the default unit of allocation shall > >> be bytes, but practically every OVA file that I've ever seen specified it > >> in GB and the current code in oVirt kind of assumes that this is the > >> case without checking the ovf:capacityAllocationUnits attribute that > >> could indicate the real unit of allocation [1]. > >> Anyway, long story short, the virtual size of the disk should currently > >> be specified in GB, e.g., ovf:populatedSize="20" in the case of > >> HAAS-hpdio.ova. > > > > wow, thanks for this excellent explanation. I have changed this in ovf > file > > > > ... > > > ... > > > > then I was able to import this modified ova file (HAAS-hpdio_new.ova). > > Interesting thing is that the vm was shown in vm list for a while (with > > state down with lock and status was initializing).
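The workaround described above can be sketched as a small script. This is a hypothetical illustration, not oVirt code: it rewrites the Disk element of an already-extracted OVF so that ovf:capacity is expressed in GB and an ovf:populatedSize in bytes is added; the sample element below is a simplified stand-in for the real VirtualBox OVF, not a copy of it.

```python
# Sketch, under the assumptions stated above: patch the <Disk> element of an
# extracted OVF in place. After this, the OVA would be re-packed with tar
# before retrying the import.
import re

def fix_disk_element(ovf_text, capacity_gb, populated_size_bytes):
    """Force ovf:capacity to a GB figure and add ovf:populatedSize (bytes)."""
    def repl(match):
        elem = match.group(0)
        # Rewrite the virtual size so it is expressed in GB.
        elem = re.sub(r'ovf:capacity="\d+"',
                      'ovf:capacity="%d"' % capacity_gb, elem)
        # Add the actual-size attribute if it is not there yet.
        if 'ovf:populatedSize' not in elem:
            elem = elem.replace('<Disk ',
                                '<Disk ovf:populatedSize="%d" '
                                % populated_size_bytes, 1)
        return elem
    return re.sub(r'<Disk [^>]*/?>', repl, ovf_text)

# Simplified stand-in for the Disk element in HAAS-hpdio.ovf.
sample = '<Disk ovf:capacity="21474836480" ovf:diskId="vmdisk1"/>'
fixed = fix_disk_element(sample, capacity_gb=20,
                         populated_size_bytes=21474836480)
print(fixed)
```

After rewriting the real OVF this way, the archive would be re-packed with tar (the thread uses the name HAAS-hpdio_new.ova for the result) before retrying the import.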
After a while this vm > > disappeared :-o > > > > I am going to test it again and collect some logs... > > there are interesting logs in /var/log/vdsm/import/ at the host used for > import > > http://mirror.slu.cz/tmp/ovirt-import.tar.bz2 > > the first of them describes the situation where I chose thick provisioning, > the second the situation with thin provisioning > > the interesting part is, I believe > > libguestfs: command: run: qemu-img > libguestfs: command: run: \ create > libguestfs: command: run: \ -f qcow2 > libguestfs: command: run: \ -o preallocation=off,compat=0.10 > libguestfs: command: run: \ > /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570- > f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/ > 9edcccbc-b244-4b94-acd3-3c8ee12bbbec > libguestfs: command: run: \ 21474836480 > Formatting > '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd- > a570-f37fa986a772/images/d44e1890-3e42-420b-939c- > dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', > fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536 > preallocation=off lazy_refcounts=off refcount_bits=16 > libguestfs: trace: vdsm_disk_create: disk_create = 0 > qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' > '/var/tmp/v2vovl2dccbd.qcow2' > '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd- > a570-f37fa986a772/images/d44e1890-3e42-420b-939c- > dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec' > qemu-img: error while writing sector 1000960: No space left on device > > virt-v2v: error: qemu-img command failed, see earlier errors > > > Sorry again, I made a mistake in: "Anyway, long story short, the virtual size of the disk should currently be specified in GB, e.g., ovf:populatedSize="20" in the case of HAAS-hpdio.ova." I should have written ovf:capacity="20". So if you wish the actual size of the disk to be 20GB (which means the disk is preallocated), the disk element should be set with: > > > >> That should do it.
If not, please share the OVA file and I will examine > >> it in my environment. > > > > original file is at > > > > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova > > > >> > >> [1] https://github.com/oVirt/ovirt-engine/blob/master/ > backend/manager/modules/utils/src/main/java/org/ovirt/ > engine/core/utils/ovf/OvfOvaReader.java#L220 > >> > >> > >> > >> > file > >> > > >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > >> > >> > >> > > >> > in the fact does not exists (nor folder > /var/log/ovirt-engine/ova/) > >> > > >> > > >> > This issue is also resolved in 4.2.2. > >> > In the meantime, please create the /var/log/ovirt-engine/ova/ > folder > >> > manually and make sure its permissions match the ones of the other > >> > folders in /var/log/ovirt-engine. > >> > >> ok, done. After another try there is this log file > >> > >> /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220173005-ovirt01.net > >> .slu.cz.log > >> > >> https://pastebin.com/M5J44qur > >> > >> > >> Is it the log of the execution of the ansible playbook that was provided > >> with a path to the /ova folder? > >> I'm interested in that in order to see how comes that its execution > >> never completed. 
> > > > well, I dont think so, it is log from import with full path to ova file > > > > > > > >> > >> > >> > >> > >> > Cheers, > >> > > >> > Jiri Slezka > >> > > >> > > > >> > > > >> > > > >> > > I am using latest 4.2.1.7-1.el7.centos version > >> > > > >> > > Cheers, > >> > > Jiri Slezka > >> > > > >> > > > >> > > [1] https://haas.cesnet.cz/#!index.md > >> > >> index.md>> > >> > > https://haas.cesnet.cz/#!index.md> > >> > >> >> - Cesnet HAAS > >> > > [2] https://haas.cesnet.cz/downloads/release-01/ > >> > >> > >> > > >> > > >> > >> > >> >> - Image repository > >> > > > >> > > > >> > > _______________________________________________ > >> > > Users mailing list > >> > > Users at ovirt.org Users at ovirt.org > >> > > >> > > >> >> > >> > > http://lists.ovirt.org/mailman/listinfo/users > >> > >> > >> > > >> > > >> > >> > >> >> > >> > > > >> > > > >> > > >> > > >> > > >> > _______________________________________________ > >> > Users mailing list > >> > Users at ovirt.org > >> > > >> > http://lists.ovirt.org/mailman/listinfo/users > >> > >> > >> > > >> > > >> > > >> > >> > >> > >> _______________________________________________ > >> Users mailing list > >> Users at ovirt.org > >> http://lists.ovirt.org/mailman/listinfo/users > >> > >> > >> > > > > > > > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Jeremy_Tourville at hotmail.com Wed Feb 21 01:05:57 2018 From: Jeremy_Tourville at hotmail.com (Jeremy Tourville) Date: Wed, 21 Feb 2018 01:05:57 +0000 Subject: [ovirt-users] Spice Client Connection Issues Using aSpice In-Reply-To: <1519116986.1980.6.camel@inparadise.se> References: , <1519116986.1980.6.camel@inparadise.se> Message-ID: Hello everyone, I can confirm that spice is working for me when I launch it using the .vv file. I have virt viewer installed on my Windows pc and it works without issue. I can also launch spice when I use movirt without any issues. I examined the contents of the .vv file to see what the certificate looks like. I can confirm that the certificate in the .vv file is the same as the file I downloaded in step 1 of my directions. I reviewed the PKI reference (https://www.ovirt.org/develop/release-management/features/infra/pki/) for a second time and I see the same certificate located in different locations. For example, all these locations contain the same certificate- * https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA * /etc/pki/vdsm/certs/cacert.pem * /etc/pki/vdsm/libvirt-spice/ca-cert.pem * /etc/pki/CA/cacert.pem This is the certificate I am using to configure my aSpice client. Can someone answer the question from my original post? The PKI reference says for version 3.2 and 3.3. Is the documentation still correct for version 4.2? At this point I am trying to find out where the problem exists, i.e.: #1 Is my client not configured correctly? #2 Am I using the wrong cert? (I think I am using the correct cert based on the research I listed above) #3 Does my client need to be able to send a password? (based on the contents of the .vv file, I'd have to guess yes) Also my xml file for the VM in question contains this: Please note: I did not perform any hand configuration of the xml file, it was all done by the system using the UI.
#4 Can I configure a file on the system to turn off ticketing and passwords and see if that makes a difference? If so, which file?
#5 Can someone explain this error?
140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
((null):27595): Spice-Warning **:reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1

What I know about it is this: According to RFC 2246, alert number 80 represents an "internal error". Here is the description from the RFC:
internal_error: An internal error unrelated to the peer or the correctness of the protocol makes it impossible to continue (such as a memory allocation failure). This message is always fatal.
#6 Could this error be related to any of #1 through #4 above?

Thanks!

________________________________
From: Karli Sjöberg
Sent: Tuesday, February 20, 2018 2:56 AM
To: Tomas Jelinek; Jeremy Tourville
Cc: users at ovirt.org
Subject: Re: [ovirt-users] Spice Client Connection Issues Using aSpice

On Tue, 2018-02-20 at 08:59 +0100, Tomas Jelinek wrote:
> On Mon, Feb 19, 2018 at 7:10 PM, Jeremy Tourville <Jeremy_Tourville at hotmail.com> wrote:
> > Hi Tomas,
> > To answer your question, yes I am really trying to use aSpice.
> >
> > I appreciate your suggestion. I'm not sure if it meets my objective. Maybe our goals are different? It seems to me that moVirt is built around portable management of the oVirt environment. I am attempting to provide a VDI type experience for running a vm. My goal is to run a lab environment with 30 chromebooks loaded with a spice client. The spice client would of course connect to the 30 vms running Kali and each session would be independent of each other.
>
> yes, it looks like a different use case
>
> > I did a little further testing with a different client (spice plugin for chrome). When I attempted to connect using that client I got a slightly different error message.
> > The message still seemed to be of the same nature, i.e.: there is a problem with SSL protocol and communication.
> >
> > Are you suggesting that moVirt can help set up the proper certificates and config the vms to use spice? Thanks!
>
> moVirt has been developed for quite some time and works pretty well, this is why I recommended it. But anyway, you have a different use case.
>
> What I think the issue is, is that oVirt can have different CAs set for console communication and for API. And I think you are trying to configure aSPICE to use the one for API.
>
> What moVirt does to make sure it is using the correct CA to put into the aSPICE is that it downloads the .vv file of the VM (e.g. you can just connect to console using webadmin and save the .vv file somewhere), parse it and use the CA= part from it as a certificate. This one is guaranteed to be the correct one.
>
> For more details about what else it takes from the .vv file you can check here:
> the parsing: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/rest/client/httpconverter/VvFileHttpMessageConverter.java
> configuration of aSPICE: https://github.com/oVirt/moVirt/blob/master/moVirt/src/main/java/org/ovirt/mobile/movirt/util/ConsoleHelper.java
>
> enjoy :)

Feels to me like OP should try to get it working _any_ "normal" way before trying to get the special use case application working? Like trying to run before learning to crawl, if that makes sense?

I would suggest just logging in to webadmin with a regular PC and trying to get a SPICE console with remote-viewer to begin with. Then, once that works, try to get a SPICE console working through moVirt with aSPICE on an Android phone, or one of the Chromebooks you have to play with before going into production. Once that's settled and you know it should work the way you normally access it, you can start playing with your special use case application.

Hope it helps!
/K

> > From: Tomas Jelinek
> > Sent: Monday, February 19, 2018 4:19 AM
> > To: Jeremy Tourville
> > Cc: users at ovirt.org
> > Subject: Re: [ovirt-users] Spice Client Connection Issues Using aSpice
> >
> > On Sun, Feb 18, 2018 at 5:32 PM, Jeremy Tourville <Jeremy_Tourville at hotmail.com> wrote:
> > > Hello,
> > > I am having trouble connecting to my guest vm (Kali Linux) which is running spice. My engine is running version: 4.2.1.7-1.el7.centos.
> > > I am using oVirt Node as my host running version: 4.2.1.1.
> > >
> > > I have taken the following steps to try and get everything running properly.
> > > Download the root CA certificate https://ovirtengine.lan/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA
> > > Edit the vm and define the graphical console entries. Video type is set to QXL, Graphics protocol is spice, USB support is enabled.
> > > Install the guest agent in Debian per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-guest-agent-in-debian/ It is my understanding that installing the guest agent will also install the virt IO device drivers.
> > > Install the spice-vdagent per the instructions here - https://www.ovirt.org/documentation/how-to/guest-agent/install-the-spice-guest-agent/
> > > On the aSpice client I have imported the CA certificate from step 1 above. I defined the connection using the IP of my Node and TLS port 5901.
> >
> > are you really using aSPICE client (e.g. the android SPICE client?). If yes, maybe you want to try to open it using moVirt (https://play.google.com/store/apps/details?id=org.ovirt.mobile.movirt&hl=en) which delegates the console to aSPICE but configures everything including the certificates on it. Should be much simpler than configuring it by hand..
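[Editor's note] For anyone scripting the approach Tomas describes: a console .vv file is a small INI document, and (assuming the usual layout, where the [virt-viewer] section carries a ca= key with newlines escaped as literal \n — the sample below is made up, not taken from this thread) the CA-extraction step moVirt performs can be sketched as:

```python
import configparser

def ca_from_vv(vv_text):
    """Return the PEM CA bundle embedded in a console .vv file.

    .vv files are INI-formatted; the ca= value stores the PEM with
    newlines escaped as literal backslash-n sequences.
    """
    cfg = configparser.ConfigParser()
    cfg.read_string(vv_text)
    return cfg.get("virt-viewer", "ca").replace("\\n", "\n")

# Hypothetical sample resembling a .vv file saved from webadmin.
sample = (
    "[virt-viewer]\n"
    "type=spice\n"
    "host=172.30.42.12\n"
    "tls-port=5901\n"
    "ca=-----BEGIN CERTIFICATE-----\\nMIIB...base64...\\n"
    "-----END CERTIFICATE-----\\n\n"
)
pem = ca_from_vv(sample)
print(pem.splitlines()[0])  # -----BEGIN CERTIFICATE-----
```

Pointing the client at whatever this returns (rather than at one of the files on the host) sidesteps the question of which on-disk CA is the right one.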
> > > To troubleshoot my connection issues I confirmed the port being used to listen.
> > > virsh # domdisplay Kali
> > > spice://172.30.42.12?tls-port=5901
> > >
> > > I see the following when attempting to connect.
> > > tail -f /var/log/libvirt/qemu/Kali.log
> > >
> > > 140400191081600:error:14094438:SSL routines:ssl3_read_bytes:tlsv1 alert internal error:s3_pkt.c:1493:SSL alert number 80
> > > ((null):27595): Spice-Warning **: reds_stream.c:379:reds_stream_ssl_accept: SSL_accept failed, error=1
> > >
> > > I came across some documentation that states in the caveat section "Certificate of spice SSL should be separate certificate."
> > > https://www.ovirt.org/develop/release-management/features/infra/pki/
> > >
> > > Is this still the case for version 4? The document references version 3.2 and 3.3. If so, how do I generate a new certificate for use with spice? Please let me know if you require further info to troubleshoot, I am happy to provide it. Many thanks in advance.
> > >
> > > _______________________________________________
> > > Users mailing list
> > > Users at ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jiri.slezka at slu.cz Wed Feb 21 17:10:27 2018
From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=)
Date: Wed, 21 Feb 2018 18:10:27 +0100
Subject: [ovirt-users] problem importing ova vm
In-Reply-To: 
References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz>
Message-ID: 

On 02/21/2018 05:35 PM, Arik Hadas wrote:
> On Wed, Feb 21, 2018 at 6:03 PM, Jiří Sléžka wrote:
> > On 02/21/2018 03:43 PM, Jiří Sléžka wrote:
> > On 02/20/2018 11:09 PM, Arik Hadas wrote:
> >> On Tue, Feb 20, 2018 at 6:37 PM, Jiří Sléžka wrote:
> >>     On 02/20/2018 03:48 PM, Arik Hadas wrote:
> >>     > On Tue, Feb 20, 2018 at 3:49 PM, Jiří Sléžka wrote:
> >>     >     Hi Arik,
> >>     >
> >>     >     On 02/20/2018 01:22 PM, Arik Hadas wrote:
> >>     >     > On Tue, Feb 20, 2018 at 2:03 PM, Jiří Sléžka wrote:
> >>     >     >     Hi,
> >>     >     >
> >>     >     > Hi Jiří,
> >>     >     >
> >>     >     >     I would like to try to import some ova files into our oVirt instance [1] [2] but I am facing problems.
> >>     >     >
> >>     >     >     I have downloaded all ova images into one of the hosts (ovirt01) into directory /ova
> >>     >     >
> >>     >     >     ll /ova/
> >>     >     >     total 6532872
> >>     >     >     -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 HAAS-hpcowrie.ovf
> >>     >     >     -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 HAAS-hpdio.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  846736896 Feb 16 16:22 HAAS-hpjdwpd.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  891043328 Feb 16 16:23 HAAS-hptelnetd.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  908222464 Feb 16 16:23 HAAS-hpuchotcp.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  880643072 Feb 16 16:24 HAAS-hpuchoudp.ova
> >>     >     >     -rw-r--r--. 1 vdsm kvm  890833920 Feb 16 16:24 HAAS-hpuchoweb.ova
> >>     >     >
> >>     >     >     Then I tried to import them - from host ovirt01 and directory /ova but spinner spins infinitely and nothing happens.
> >>     >     >
> >>     >     > And does it work when you provide a path to the actual ova file, i.e., /ova/HAAS-hpdio.ova, rather than to the directory?
> >>     >
> >>     >     this time it ends with "Failed to load VM configuration from OVA file: /ova/HAAS-hpdio.ova" error.
> >>     >
> >>     > Note that the logic that is applied on a specified folder is "try fetching an 'ova folder' out of the destination folder" rather than "list all the ova files inside the specified folder". It seems that you expected the former output since there are no disks in that folder, right?
> >>
> >>     yes, It would be more user friendly to list all ova files and then select which one to import (like listing all vms in vmware import)
> >>
> >>     Maybe description of path field in manager should be "Path to ova file" instead of "Path" :-)
> >>
> >> Sorry, I obviously meant 'latter' rather than 'former' before..
> >> Yeah, I agree that would be better, at least until listing the OVA files in the folder is implemented (that was the original plan, btw) - could you please file a bug?
> >
> > yes, sure
> >
> >>     >     >     I cannot see anything relevant in vdsm log of host ovirt01.
> >>     >     >
> >>     >     >     In the engine.log of our standalone ovirt manager is just this relevant line
> >>     >     >
> >>     >     >     2018-02-20 12:35:04,289+01 INFO
> >>     >     >     [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] (default task-31) [458990a7-b054-491a-904e-5c4fe44892c4] Executing Ansible command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin [/usr/bin/ansible-playbook, --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, --inventory=/tmp/ansible-inventory8237874608161160784, --extra-vars=ovirt_query_ova_path=/ova, /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log]
> >>     >     >
> >>     >     >     also there are two ansible processes which are still running (and make heavy load on system (load 9+ and growing, it looks like it eats all the memory and system starts swapping))
> >>     >     >
> >>     >     >     ovirt    32087  3.3  0.0 332252  5980 ?        Sl   12:35   0:41 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784 --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >>     >     >     ovirt    32099 57.5 78.9 15972880 11215312 ?   R    12:35  11:52 /usr/bin/python2 /usr/bin/ansible-playbook --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa --inventory=/tmp/ansible-inventory8237874608161160784
> >>     >     >     --extra-vars=ovirt_query_ova_path=/ova /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml
> >>     >     >
> >>     >     >     playbook looks like
> >>     >     >
> >>     >     >     - hosts: all
> >>     >     >       remote_user: root
> >>     >     >       gather_facts: no
> >>     >     >
> >>     >     >       roles:
> >>     >     >         - ovirt-ova-query
> >>     >     >
> >>     >     >     and it looks like it only runs query_ova.py but on all hosts?
> >>     >     >
> >>     >     > No, the engine provides ansible the host to run on when it executes the playbook.
> >>     >     > It would only be executed on the selected host.
> >>     >     >
> >>     >     >     How does this work? ...or should it work?
> >>     >     >
> >>     >     > It should, especially that part of querying the OVA is supposed to be really quick.
> >>     >     > Can you please share the engine log and /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log ?
> >>     >
> >>     >     engine log is here:
> >>     >
> >>     >     https://pastebin.com/nWWM3UUq
> >>     >
> >>     > Thanks.
> >>     > Alright, so now the configuration is fetched but its processing fails.
> >>     > We fixed many issues in this area recently, but it appears that something is wrong with the actual size of the disk within the ovf file that resides inside this ova file.
> >>     > Can you please share that ovf file that resides inside /ova/HAAS-hpdio.ova?
> >>
> >>     file HAAS-hpdio.ova
> >>     HAAS-hpdio.ova: POSIX tar archive (GNU)
> >>
> >>     [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova
> >>     HAAS-hpdio.ovf
> >>     HAAS-hpdio-disk001.vmdk
> >>
> >>     file HAAS-hpdio.ovf is here:
> >>
> >>     https://pastebin.com/80qAU0wB
> >>
> >> Thanks again.
> >> So that seems to be a VM that was exported from Virtual Box, right?
> >> They don't do anything that violates the OVF specification but they do some non-common things that we don't anticipate:
> >
> > yes, it is most likely ova from VirtualBox
> >
> >> First, they don't specify the actual size of the disk and the current code in oVirt relies on that property.
> >> There is a workaround for this though: you can extract an OVA file, edit its OVF configuration - adding ovf:populatedSize="X" (and change ovf:capacity as I'll describe next) to the Disk element inside the DiskSection and pack the OVA again (tar cvf
> >> X is either:
> >> 1. the actual size of the vmdk file + some buffer (iirc, we used to take 15% of extra space for the conversion)
> >> 2. if you're using a file storage or you don't mind consuming more storage space on your block storage, simply set X to the virtual size of the disk (in bytes) as indicated by the ovf:capacity field, e.g., ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova.
> >>
> >> Second, the virtual size (indicated by ovf:capacity) is specified in bytes. The specification says that the default unit of allocation shall be bytes, but practically every OVA file that I've ever seen specified it in GB and the current code in oVirt kind of assumes that this is the case without checking the ovf:capacityAllocationUnits attribute that could indicate the real unit of allocation [1].
> >> Anyway, long story short, the virtual size of the disk should currently be specified in GB, e.g., ovf:populatedSize="20" in the case of HAAS-hpdio.ova.
>
> wow, thanks for this excellent explanation. I have changed this in ovf file
>
> ... ovf:populatedSize="20" ... ...
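[Editor's note] For anyone who wants to script Arik's unpack/edit/repack workaround rather than do it by hand, here is a rough sketch (my own, not from this thread): it copies a plain-tar OVA, adding an ovf:populatedSize attribute to each Disk element of the embedded .ovf with a naive string patch — a namespace-aware XML edit would be more robust.

```python
import io
import re
import tarfile

def add_populated_size(ovf_text, populated_size):
    # Naive string-level patch: add ovf:populatedSize to every <Disk ...> element.
    return re.sub(r"(<Disk\b)",
                  r'\g<1> ovf:populatedSize="%d"' % populated_size,
                  ovf_text)

def repack_ova(src_path, dst_path, populated_size):
    """Copy an OVA (a plain tar archive), patching its .ovf member in flight."""
    with tarfile.open(src_path, "r") as src, tarfile.open(dst_path, "w") as dst:
        for member in src.getmembers():
            data = src.extractfile(member).read()  # assumes regular files only
            if member.name.endswith(".ovf"):
                data = add_populated_size(data.decode("utf-8"),
                                          populated_size).encode("utf-8")
                member.size = len(data)            # keep the tar header honest
            dst.addfile(member, io.BytesIO(data))
```

Applied to the example above, that would be repack_ova("HAAS-hpdio.ova", "HAAS-hpdio_new.ova", 21474836480) — with the ovf:capacity correction from the follow-up messages still done separately.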
> > then I was able to import this modified ova file (HAAS-hpdio_new.ova).
> > Interesting thing is that the vm was shown in the vm list for a while (with state down with lock and status was initializing). After a while this vm disappeared :-o
> >
> > I am going to test it again and collect some logs...
>
> there are interesting logs in /var/log/vdsm/import/ at the host used for import
>
> http://mirror.slu.cz/tmp/ovirt-import.tar.bz2
>
> first of them describes the situation where I chose thick provisioning, the second the situation with thin provisioning
>
> interesting part is I believe
>
> libguestfs: command: run: qemu-img
> libguestfs: command: run: \ create
> libguestfs: command: run: \ -f qcow2
> libguestfs: command: run: \ -o preallocation=off,compat=0.10
> libguestfs: command: run: \ /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec
> libguestfs: command: run: \ 21474836480
> Formatting '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', fmt=qcow2 size=21474836480 compat=0.10 encryption=off cluster_size=65536 preallocation=off lazy_refcounts=off refcount_bits=16
> libguestfs: trace: vdsm_disk_create: disk_create = 0
> qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' '/var/tmp/v2vovl2dccbd.qcow2' '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570-f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec'
> qemu-img: error while writing sector 1000960: No space left on device
>
> virt-v2v: error: qemu-img command failed, see earlier errors

> Sorry again, I made a mistake in:
> "Anyway, long story short, the virtual size of the disk should currently be specified in GB, e.g., ovf:populatedSize="20" in the case of HAAS-hpdio.ova."
> I should have written ovf:capacity="20".
> So if you wish the actual size of the disk to be 20GB (which means the disk is preallocated), the disk element should be set with: ovf:populatedSize="21474836480" ...

now I have this in ovf file

..., args=None) (threadPool:208)
2018-02-21 18:02:03,995+0100 INFO (tasks/1) [storage.StorageDomain] Create placeholder /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa for image's volumes (sd:1244)
2018-02-21 18:02:04,016+0100 INFO (tasks/1) [storage.Volume] Creating volume bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 (volume:1151)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] The requested initial 21474836480 is bigger than the max size 134217728 (blockVolume:345)
2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] Failed to create volume /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665-9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa/bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0: Invalid parameter: 'initial size=41943040' (volume:1175)
2018-02-21 18:02:04,061+0100 ERROR (tasks/1) [storage.Volume] Unexpected error (volume:1215)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1172, in create
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 501, in _create
    size, initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 545, in calculate_volume_alloc_size
    preallocate, capacity, initial_size)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", line 347, in calculate_volume_alloc_size
    initial_size)
InvalidParameterException: Invalid parameter: 'initial size=41943040'
2018-02-21 18:02:04,062+0100 ERROR (tasks/1) [storage.TaskManager.Task] (Task='e7598aa1-420a-4612-9ee8-03012b1277d9') Unexpected error (task:875)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 882, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", line 79, in wrapper
    return method(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1936, in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 801, in createVolume
    initialSize=initialSize)
  File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line 1217, in create
    (volUUID, e))
VolumeCreationError: Error creating a new volume: (u"Volume creation bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 failed: Invalid parameter: 'initial size=41943040'",)

there are no new logs in import folder on host used for import...

> >> That should do it. If not, please share the OVA file and I will examine it in my environment.
> >
> > original file is at
> >
> > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova
>
> >> [1] https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/utils/src/main/java/org/ovirt/engine/core/utils/ovf/OvfOvaReader.java#L220
>
> >     >     file /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220123504-ovirt01.net.slu.cz.log in fact does not exist (nor folder /var/log/ovirt-engine/ova/)
> >     >
> >     > This issue is also resolved in 4.2.2.
> >     > In the meantime, please create the /var/log/ovirt-engine/ova/ folder manually and make sure its permissions match the ones of the other folders in /var/log/ovirt-engine.
> >
> >     ok, done. After another try there is this log file
> >
> >     /var/log/ovirt-engine/ova/ovirt-query-ova-ansible-20180220173005-ovirt01.net.slu.cz.log
> >     https://pastebin.com/M5J44qur
>
> >> Is it the log of the execution of the ansible playbook that was provided with a path to the /ova folder?
> >> I'm interested in that in order to see how come its execution never completed.

well, I don't think so, it is log from import with full path to ova file

> >     >     Cheers,
> >     >     Jiri Slezka
> >     >
> >     >     >     I am using latest 4.2.1.7-1.el7.centos version
> >     >     >
> >     >     >     Cheers,
> >     >     >     Jiri Slezka
> >     >     >
> >     >     >     [1] https://haas.cesnet.cz/#!index.md - Cesnet HAAS
> >     >     >     [2] https://haas.cesnet.cz/downloads/release-01/ - Image repository
> >     >     >
> >     >     >     _______________________________________________
> >     >     >     Users mailing list
> >     >     >     Users at ovirt.org
> >     >     >     http://lists.ovirt.org/mailman/listinfo/users
> > _______________________________________________
> > Users mailing list
> > Users at ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3716 bytes
Desc: S/MIME Cryptographic Signature
URL: 

From mlipchuk at redhat.com Wed Feb 21 23:45:35 2018
From: mlipchuk at redhat.com (Maor Lipchuk)
Date: Thu, 22 Feb 2018 01:45:35 +0200
Subject: [ovirt-users] VMs with multiple vdisks don't migrate
In-Reply-To: <4663-5a869e80-5-50066700@115233288>
References: <4663-5a869e80-5-50066700@115233288>
Message-ID: 

Hi Frank,

Sorry about the delayed response.
I've been going through the logs you attached, although I could not find any specific indication why the migration failed because of the disk you were mentioning.
Does this VM run with both disks on the target host without migration?

Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote:
> Hi Maor,
> sorry for the double post, I've changed the email address of my account and supposed that I'd need to re-post it.
> And thank you for your time. Here are the logs. I added a vdisk to an existing VM: it no longer migrates, needing to power it off after minutes. Then simply deleting the second disk makes it migrate in exactly 9s without problem!
> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>
> --
>
> Cordialement,
>
> *Frank Soyer*
> Le Mercredi, Février 14, 2018 11:04 CET, Maor Lipchuk a écrit:
>
> > Hi Frank,
> > I already replied on your last email.
> Can you provide the VDSM logs from the time of the migration failure for both hosts:
> ginger.local.systea.fr and victor.local.systea.fr
>
> Thanks,
> Maor
>
> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote:
>> Hi all,
>> I discovered yesterday a problem when migrating VMs with more than one vdisk.
>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 VMs needed for a test, from a template with a 20G vdisk. On these VMs I added a 100G vdisk (for these tests I didn't want to waste time to extend the existing vdisks... But I lost time finally...). The VMs with the 2 vdisks work well.
>> Now I saw some updates waiting on the host. I tried to put it in maintenance... But it stopped on the two VMs. They were marked "migrating", but no longer accessible. Other (small) VMs with only 1 vdisk were migrated without problem at the same time.
>> I saw that a kvm process for the (big) VMs was launched on the source AND destination host, but after tens of minutes, the migration and the VMs were still frozen. I tried to cancel the migration for the VMs: failed. The only way to stop it was to power off the VMs: the kvm process died on the 2 hosts and the GUI alerted on a failed migration.
>> In doubt, I tried to delete the second vdisk on one of these VMs: it migrates then without error! And no access problem.
>> I tried to extend the first vdisk of the second VM, then delete the second vdisk: it migrates now without problem!
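[Editor's note] The engine.log excerpts quoted in this thread carry a convergenceSchedule for each migration, which can be read as a downtime ladder. The toy interpreter below is my own reading of the quoted values — the semantics (each stalling "limit" raises the allowed downtime once that many stalled iterations are seen; the limit=-1 entry aborts once the ladder is exhausted) are assumed, not taken from the oVirt source.

```python
# Values copied from the convergenceSchedule in the engine.log excerpts:
# init=setDowntime(100); stalling limits 1..6 raise it; limit=-1 aborts.
INIT_DOWNTIME_MS = 100
STALLING = [(1, 150), (2, 200), (3, 300), (4, 400), (6, 500)]  # (limit, downtime ms)

def action_for(stalled_iterations):
    """Return ('setDowntime', ms) or ('abort', None) for a stall count."""
    if stalled_iterations > STALLING[-1][0]:
        return ("abort", None)          # the limit=-1 fallback
    downtime = INIT_DOWNTIME_MS
    for limit, ms in STALLING:
        if stalled_iterations >= limit:
            downtime = ms
    return ("setDowntime", downtime)
```

Under this reading, a migration that keeps stalling is granted 100 → 150 → ... → 500 ms of downtime and is then aborted, which matches the "migration freezes, cancel fails" symptom being about the guest never converging rather than about the schedule itself.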
>>
>> So after another test with a VM with 2 vdisks, I can say that this blocked the migration process :(
>>
>> In engine.log, for a VM with 1 vdisk migrating well, we see:
>>
>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}'
>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER
>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0
>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381
>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381
>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0
>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin at internal-authz).
>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435
>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}',
managed='false', plugged='true', >> readOnly='false', deviceAlias='input0', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id=' >> VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab- >> 4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- >> a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >> controller=0, type=virtio-serial, port=2}', managed='false', >> plugged='true', readOnly='false', deviceAlias='channel1', >> customProperties='[]', snapshotId='null', logicalName='null', >> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, >> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 >> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' >> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] 
(DefaultQuartzScheduler9) [54a65b66] >> Received a vnc Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >> port=5901} >> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] >> Received a lease Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >> was unexpectedly detected as 'MigratingTo' on VDS >> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected >> on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >> ginger.local.systea.fr) ignoring it in the refresh until migration is >> done >> .... 
>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >> victor.local.systea.fr) >> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >> DestroyVDSCommand(HostName = victor.local.systea.fr, >> DestroyVmVDSCommandParameters:{runAsync='true', >> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >> id: 560eca57 >> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >> DestroyVDSCommand, log id: 560eca57 >> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >> moved from 'MigratingFrom' --> 'Down' >> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status >> 'MigratingTo' >> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >> moved from 'MigratingTo' --> 'Up' >> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >> MigrateStatusVDSCommandParameters:{runAsync='true', >> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, >> Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, >> Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, >> Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) >> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >> (ForkJoinPool-1-worker-4) [] Lock freed to object >> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >> sharedLocks=''}' >> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >> FullListVDSCommand(HostName = ginger.local.systea.fr, >> FullListVDSCommandParameters:{runAsync='true', >> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >> 2018-02-12 16:46:42,254+01 INFO 
[org.ovirt.engine.core.vdsbro >> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, >> FullListVDSCommand, return: [{acpiEnable=true, >> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >> tabletEnable=true, pid=18748, guestDiskMapping={}, >> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, >> guestNumaNodes=[Ljava.lang.Object;@760085fd, >> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >> controller=0, type=virtio-serial, port=1}', managed='false', >> plugged='true', readOnly='false', deviceAlias='channel0', >> customProperties='[]', snapshotId='null', logicalName='null', >> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254dev >> ice_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4 >> -4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId=' >> 017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >> readOnly='false', deviceAlias='input0', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id=' >> VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> 
device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab- >> 4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- >> a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >> controller=0, type=virtio-serial, port=2}', managed='false', >> plugged='true', readOnly='false', deviceAlias='channel1', >> customProperties='[]', snapshotId='null', logicalName='null', >> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, >> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 >> <(430)%20425-9600>, display=vnc}], log id: 7cc65298 >> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >> Received a vnc Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >> port=5901} >> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >> Received a lease Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >> 
_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >> ect;@77951faf, custom={device_fbddd528-7d93-4 >> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >> controller=0, type=virtio-serial, port=1}', managed='false', >> plugged='true', readOnly='false', deviceAlias='channel0', >> customProperties='[]', snapshotId='null', logicalName='null', >> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254dev >> ice_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4 >> -4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId=' >> 017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >> readOnly='false', deviceAlias='input0', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id=' >> VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >> bus=0x00, 
domain=0x0000, type=pci, function=0x1}', managed='false', >> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >> snapshotId='null', logicalName='null', hostDevice='null'}, >> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab- >> 4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485- >> a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >> controller=0, type=virtio-serial, port=2}', managed='false', >> plugged='true', readOnly='false', deviceAlias='channel1', >> customProperties='[]', snapshotId='null', logicalName='null', >> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, >> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 >> <(430)%20426-3620>, display=vnc}], log id: 58cdef4c >> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] >> Received a vnc Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >> port=5901} >> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] >> Received a lease Device without an address when processing VM >> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >> 
{lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >> >> >> >> >> For the VM with 2 vdisks we see : >> >> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired >> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', >> sharedLocks=''}' >> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >> Running command: MigrateVmToServerCommand internal: false. Entities >> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >> group MIGRATE_VM with role type USER >> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >> maxIncomingMigrations='2', maxOutgoingMigrations='2', >> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >> action={name=setDowntime, params=[200]}}, {limit=3, >> action={name=setDowntime, params=[300]}}, {limit=4, >> action={name=setDowntime, params=[400]}}, {limit=6, >> 
action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >> params=[]}}]]'}), log id: 3702a9e0 >> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >> MigrateVDSCommandParameters:{runAsync='true', >> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >> maxIncomingMigrations='2', maxOutgoingMigrations='2', >> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >> action={name=setDowntime, params=[200]}}, {limit=3, >> action={name=setDowntime, params=[300]}}, {limit=4, >> action={name=setDowntime, params=[400]}}, {limit=6, >> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >> params=[]}}]]'}), log id: 1840069c >> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >> log id: 1840069c >> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), >> 
Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: >> f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, >> Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, >> Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, >> User: admin at internal-authz). >> ... >> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro >> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) >> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' >> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >> was unexpectedly detected as 'MigratingTo' on VDS >> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected >> on 'd569c2dd-8f30-4878-8aea-858db285cf69') >> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >> victor.local.systea.fr) ignoring it in the refresh until migration is >> done >> ... >> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >> was unexpectedly detected as 'MigratingTo' on VDS >> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected >> on 'd569c2dd-8f30-4878-8aea-858db285cf69') >> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >> victor.local.systea.fr) ignoring it in the refresh until migration is >> done >> >> >> >> and so on, last lines repeated indefinitly for hours since we poweroff >> the VM... 
>> Is this something known? Any idea about it?
>>
>> Thanks
>>
>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
>>
>> --
>>
>> Cordialement,
>>
>> *Frank Soyer *
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>>
>

From eshenitz at redhat.com Thu Feb 22 05:11:39 2018
From: eshenitz at redhat.com (Eyal Shenitzky)
Date: Thu, 22 Feb 2018 07:11:39 +0200
Subject: [ovirt-users] Unable to remove storage domain's
In-Reply-To: References: Message-ID: 

So here is the query:

BEGIN
    -- Create a temporary table holding all the images and disks that reside
    -- only on the specified storage domain (copied template disks that exist
    -- on multiple storage domains will not be part of this table).
    CREATE TEMPORARY TABLE STORAGE_DOMAIN_MAP_TABLE AS
    SELECT image_guid AS image_id, disk_id
    FROM memory_and_disk_images_storage_domain_view
    WHERE storage_id = v_storage_domain_id
    EXCEPT
    SELECT image_guid AS image_id, disk_id
    FROM memory_and_disk_images_storage_domain_view
    WHERE storage_id != v_storage_domain_id;
exception when others then
    TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE;
    INSERT INTO STORAGE_DOMAIN_MAP_TABLE
    SELECT image_guid AS image_id, disk_id
    FROM memory_and_disk_images_storage_domain_view
    WHERE storage_id = v_storage_domain_id
    EXCEPT
    SELECT image_guid AS image_id, disk_id
    FROM memory_and_disk_images_storage_domain_view
    WHERE storage_id != v_storage_domain_id;
END;

Please try to run it and share the results.

On Wed, Feb 21, 2018 at 4:01 PM, Eyal Shenitzky wrote:

> Note that destroy and remove are two different operations.
>
> Did you try both?
>
> On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik <
> ladislav.humenik at 1und1.de> wrote:
>
>> Hi, of course I did. I put these domains into maintenance first, then
>> detached them from the datacenter.
>>
>> The last step is destroy or remove ("just name it"), and this last step is
>> mysteriously not working.
>>
>> It throws the SQL exception which I attached before.
>>
>> Thank you in advance
>> ladislav
>>
>> On 21.02.2018 14:03, Eyal Shenitzky wrote:
>>
>> Did you manage to set the domain to maintenance?
>>
>> If so, you can try to 'Destroy' the domain.
>>
>> On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik <
>> ladislav.humenik at 1und1.de> wrote:
>>
>>> Hi, no
>>>
>>>
>>> this table "STORAGE_DOMAIN_MAP_TABLE" is not present on any of our
>>> oVirt installations, and
>>>
>>> based on link
>>>
>>> this is just a temporary table. Can you point me to what query I should
>>> test?
>>>
>>> thank you in advance
>>>
>>> Ladislav
>>>
>>> On 21.02.2018 12:50, Eyal Shenitzky wrote:
>>>
>>> According to the logs, it seems like you are somehow missing a table in the
>>> DB -
>>>
>>> STORAGE_DOMAIN_MAP_TABLE.
>>>
>>> 4211-b98f-a37604642251] Command 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' failed: CallableStatementCallback; bad SQL grammar [{call force_delete_storage_domain(?)}]; nested exception is org.postgresql.util.PSQLException: ERROR: relation "storage_domain_map_table" does not exist
>>>
>>> Did you try to run some SQL query that could have caused this issue?
>>>
>>>
>>>
>>> On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik <
>>> ladislav.humenik at 1und1.de> wrote:
>>>
>>>> Hello,
>>>>
>>>> we cannot remove old NFS-data storage domains; these 4 are already
>>>> deactivated and unattached:
>>>>
>>>> engine=> select id,storage_name from storage_domains where storage_name
>>>> like 'bs09%';
>>>> id | storage_name
>>>> --------------------------------------+---------------
>>>> 819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm
>>>> 9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm
>>>> f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm
>>>> a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm
>>>> (4 rows)
>>>>
>>>>
>>>> The only images which still reside in the DB are OVF_STORE templates:
>>>>
>>>> engine=> select image_guid,storage_name,disk_description from
>>>> images_storage_domain_view where storage_name like 'bs09%';
>>>> image_guid | storage_name | disk_description
>>>> --------------------------------------+---------------+------------------
>>>> 6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | bs09bF2C9kvm | OVF_STORE
>>>> 997fe5a6-9647-4d42-b074-27767984b7d2 | bs09bF2C9kvm | OVF_STORE
>>>> 2b1884cb-eb37-475f-9c24-9638400f15af | bs09aF2C10kvm | OVF_STORE
>>>> 85383ffe-68ba-4a82-a692-d93e38bf7f4c | bs09aF2C9kvm | OVF_STORE
>>>> bca14796-aed1-4747-87c9-1b25861fad86 | bs09aF2C9kvm | OVF_STORE
>>>> 797c27bf-7c2d-4363-96f9-565fa58d0a5e | bs09bF2C10kvm | OVF_STORE
>>>> 5d092a1b-597c-48a3-8058-cbe40d39c2c9 | bs09bF2C10kvm | OVF_STORE
>>>> dc61f42f-1330-4bfb-986a-d868c736da59 | bs09aF2C10kvm | OVF_STORE
>>>> (8 rows)
>>>>
>>>>
>>>>
>>>> Current oVirt Engine version: 4.1.8.2-1.el7.centos
>>>> Exception logs from the engine are in the attachment
>>>>
>>>> Do you have any magic SQL statement to figure out what is causing this
>>>> exception, and how we can remove those storage domains without disruption?
>>>>
>>>> Thank you in advance
>>>>
>>>> --
>>>> Ladislav Humenik
>>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>
>>>>
>>>
>>>
>>> --
>>> Regards,
>>> Eyal Shenitzky
>>>
>>>
>>> --
>>> Ladislav Humenik
>>>
>>> System administrator / VI
>>> IT Operations Hosting Infrastructure
>>>
>>> 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany
>>> Phone: +49 721 91374-8361 <+49%20721%20913748361>
>>> E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de
>>>
>>> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498
>>>
>>> Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg
>>> Aufsichtsratsvorsitzender: René Obermann
>>>
>>>
>>> Member of United Internet
>>>
>>> Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden.
>>>
>>> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail.
>>>
>>
>>
>> --
>> Regards,
>> Eyal Shenitzky
>>
>>
>> --
>> Ladislav Humenik
>>
>> System administrator / VI
>> IT Operations Hosting Infrastructure
>>
>> 1&1 Internet SE | Ernst-Frey-Str.
>> 5 | 76135 Karlsruhe | Germany
>> Phone: +49 721 91374-8361 <+49%20721%20913748361>
>> E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de
>>
>> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498
>>
>> Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg
>> Aufsichtsratsvorsitzender: René Obermann
>>
>>
>> Member of United Internet
>>
>> Diese E-Mail kann vertrauliche und/oder gesetzlich geschützte Informationen enthalten. Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden.
>>
>> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail.
>>
>
>
> --
> Regards,
> Eyal Shenitzky
>

--
Regards,
Eyal Shenitzky

From ladislav.humenik at 1und1.de Thu Feb 22 07:50:14 2018
From: ladislav.humenik at 1und1.de (Ladislav Humenik)
Date: Thu, 22 Feb 2018 08:50:14 +0100
Subject: [ovirt-users] Unable to remove storage domain's
In-Reply-To: References: Message-ID: <6dc0eead-08f4-f8b5-5ffa-41932d4227eb@1und1.de>

Hello again, the result is:

ERROR: permission denied to create temporary tables in database "engine"

- I forgot to mention that we do not run the DB on localhost, but on a
dedicated server which is managed by DB admins.
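For anyone else hitting the same error, the fix was a one-line grant run by the DB admins on the dedicated server. A minimal sketch, assuming the engine connects as the PostgreSQL role `engine` to the database `engine` (both names are assumptions, adjust to your setup):

```sql
-- Run as a PostgreSQL superuser on the DB host.
-- Allows the engine role to create temporary tables in the engine DB;
-- role and database names below are assumptions for this example.
GRANT TEMPORARY ON DATABASE engine TO engine;
```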
After granting the necessary TEMPORARY privileges: engine-log: 2018-02-22 08:47:57,678+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', sharedLocks=''}' 2018-02-22 08:47:57,694+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Running command: RemoveStorageDomainCommand internal: false. Entities affected :? ID: f5efd264-045b-48d5-b35c-661a30461de5 Type: StorageAction group DELETE_STORAGE_DOMAIN with role type ADMIN 2018-02-22 08:47:57,877+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] EVENT_ID: USER_REMOVE_STORAGE_DOMAIN(960), Correlation ID: 6f250dbf-40d2-4017-861a-ae410fc382f5, Job ID: d825643c-3f2e-449c-a19d-dc55af74d153, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Storage Domain bs09aF2C9kvm was removed by admin at internal 2018-02-22 08:47:57,881+01 INFO [org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand] (default task-13) [6f250dbf-40d2-4017-861a-ae410fc382f5] Lock freed to object 'EngineLock:{exclusiveLocks='[f5efd264-045b-48d5-b35c-661a30461de5=STORAGE]', sharedLocks=''}' Thank you for your help, Ladislav On 22.02.2018 06:11, Eyal Shenitzky wrote: > So here is the Query: > > BEGIN -- Creating a temporary table which will give all the images and > the disks which resids on only the specified storage domain. 
(copied > template disks on multiple storage domains will not be part of this > table) CREATE TEMPORARY TABLE STORAGE_DOMAIN_MAP_TABLE AS SELECT image_guid AS image_id, > disk_id > FROM memory_and_disk_images_storage_domain_view > WHERE storage_id = v_storage_domain_id > > EXCEPT SELECT image_guid AS image_id, > disk_id > FROM memory_and_disk_images_storage_domain_view > WHERE storage_id != v_storage_domain_id; > > exception when others then TRUNCATE TABLE STORAGE_DOMAIN_MAP_TABLE; > > INSERT INTO STORAGE_DOMAIN_MAP_TABLE > SELECT image_guid AS image_id, > disk_id > FROM memory_and_disk_images_storage_domain_view > WHERE storage_id = v_storage_domain_id > > EXCEPT SELECT image_guid AS image_id, > disk_id > FROM memory_and_disk_images_storage_domain_view > WHERE storage_id != v_storage_domain_id; > END; > Try to run it and share the results please. > > On Wed, Feb 21, 2018 at 4:01 PM, Eyal Shenitzky > wrote: > > Note that destroy and remove are two different operations. > > Did you try both? > > On Wed, Feb 21, 2018 at 3:17 PM, Ladislav Humenik > > wrote: > > Hi, of course i did. I put these domain's first in to > maintenance, then Detached it from the datacenter. > > The last step is destroy or remove "just name it" and this > last step is mysteriously not working. > > > > and throwing sql exception which I attached before. > > Thank you in advance > ladislav > > On 21.02.2018 14:03, Eyal Shenitzky wrote: >> Did you manage to set the domain to maintenance? >> >> If so you can try to 'Destroy' the domain. >> >> On Wed, Feb 21, 2018 at 2:57 PM, Ladislav Humenik >> > > wrote: >> >> Hi, no >> >> >> this table "STORAGE_DOMAIN_MAP_TABLE" is not present at >> any of our ovirt's and >> >> based on link >> >> this is just a temporary table. Can you point me to what >> query should I test?
>> >> thank you in advance >> >> Ladislav >> >> >> On 21.02.2018 12:50, Eyal Shenitzky wrote: >>> According to the logs, it seems like you somehow missing >>> a table in the DB - >>> STORAGE_DOMAIN_MAP_TABLE. >>> 4211-b98f-a37604642251] Command >>> 'org.ovirt.engine.core.bll.storage.domain.RemoveStorageDomainCommand' >>> failed: CallableStatementCallback; bad SQL grammar >>> [{call force_delete_storage_domain(?)}]; nested >>> exception is org.postgresql.util.PSQLException: ERROR: >>> relation "storage_domain_map_table" does not exist >>> Did you tryied to run some SQL query which cause that issue? >>> >>> >>> >>> On Wed, Feb 21, 2018 at 11:48 AM, Ladislav Humenik >>> >> > wrote: >>> >>> Hello, >>> >>> we can not remove old NFS-data storage domains, this >>> 4 are already deactivated and unattached: >>> >>> engine=> select id,storage_name from storage_domains >>> where storage_name like 'bs09%'; >>> id????????????????? | storage_name >>> --------------------------------------+--------------- >>> ?819b419e-638b-43c7-9189-b93c0314d38a | bs09aF2C10kvm >>> ?9a403356-f58a-4e80-9435-026e6f853a9b | bs09bF2C10kvm >>> ?f5efd264-045b-48d5-b35c-661a30461de5 | bs09aF2C9kvm >>> ?a0989c64-fc41-4a8b-8544-914137d7eae8 | bs09bF2C9kvm >>> (4 rows) >>> >>> >>> The only images which still resides in DB are >>> OVF_STORE templates: >>> >>> engine=> select >>> image_guid,storage_name,disk_description from >>> images_storage_domain_view where storage_name like >>> 'bs09%'; >>> image_guid????????????? | storage_name? | >>> disk_description >>> --------------------------------------+---------------+------------------ >>> ?6b72139d-a4b3-4e22-98e2-e8b1d64e8e50 | >>> bs09bF2C9kvm? | OVF_STORE >>> ?997fe5a6-9647-4d42-b074-27767984b7d2 | >>> bs09bF2C9kvm? | OVF_STORE >>> ?2b1884cb-eb37-475f-9c24-9638400f15af | >>> bs09aF2C10kvm | OVF_STORE >>> ?85383ffe-68ba-4a82-a692-d93e38bf7f4c | >>> bs09aF2C9kvm? | OVF_STORE >>> ?bca14796-aed1-4747-87c9-1b25861fad86 | >>> bs09aF2C9kvm? 
| OVF_STORE >>> ?797c27bf-7c2d-4363-96f9-565fa58d0a5e | >>> bs09bF2C10kvm | OVF_STORE >>> ?5d092a1b-597c-48a3-8058-cbe40d39c2c9 | >>> bs09bF2C10kvm | OVF_STORE >>> ?dc61f42f-1330-4bfb-986a-d868c736da59 | >>> bs09aF2C10kvm | OVF_STORE >>> (8 rows) >>> >>> >>> >>> Current oVirt Engine version: 4.1.8.2-1.el7.centos >>> Exception logs from engine are in attachment >>> >>> Do you have any magic sql statement to figure out >>> what is causing this exception and how we can remove >>> those storage domains without disruption ? >>> >>> Thank you in advance >>> >>> -- >>> Ladislav Humenik >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >>> >>> >>> >>> -- >>> Regards, >>> Eyal Shenitzky >> >> -- >> Ladislav Humenik >> >> System administrator / VI >> IT Operations Hosting Infrastructure >> >> 1&1 Internet SE |Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany >> >> Phone:+49 721 91374-8361 >> E-Mail:ladislav.humenik at 1und1.de | Web:www.1und1.de >> >> Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 >> >> Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg >> Aufsichtsratsvorsitzender: Ren? Obermann >> >> >> Member of United Internet >> >> Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. >> >> This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. 
If you have received this e-mail in error, please notify the sender and delete the e-mail. >> >> >> >> >> -- >> Regards, >> Eyal Shenitzky > > -- > Ladislav Humenik > > System administrator / VI > IT Operations Hosting Infrastructure > > 1&1 Internet SE |Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany > > Phone:+49 721 91374-8361 > E-Mail:ladislav.humenik at 1und1.de | Web:www.1und1.de > > Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 > > Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg > Aufsichtsratsvorsitzender: Ren? Obermann > > > Member of United Internet > > Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. Wenn Sie nicht der bestimmungsgem??e Adressat sind oder diese E-Mail irrt?mlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgem??en Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. > > This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. > > > > > -- > Regards, > Eyal Shenitzky > > > > > -- > Regards, > Eyal Shenitzky -- Ladislav Humenik System administrator / VI IT Operations Hosting Infrastructure 1&1 Internet SE | Ernst-Frey-Str. 5 | 76135 Karlsruhe | Germany Phone: +49 721 91374-8361 E-Mail: ladislav.humenik at 1und1.de | Web: www.1und1.de Hauptsitz Montabaur, Amtsgericht Montabaur, HRB 24498 Vorstand: Robert Hoffmann, Hans-Henning Kettler, Uwe Lamnek, Matthias Steinberg Aufsichtsratsvorsitzender: Ren? Obermann Member of United Internet Diese E-Mail kann vertrauliche und/oder gesetzlich gesch?tzte Informationen enthalten. 
Wenn Sie nicht der bestimmungsgemäße Adressat sind oder diese E-Mail irrtümlich erhalten haben, unterrichten Sie bitte den Absender und vernichten Sie diese E-Mail. Anderen als dem bestimmungsgemäßen Adressaten ist untersagt, diese E-Mail zu speichern, weiterzuleiten oder ihren Inhalt auf welche Weise auch immer zu verwenden. This e-mail may contain confidential and/or privileged information. If you are not the intended recipient of this e-mail, you are hereby notified that saving, distribution or use of the content of this e-mail in any way is prohibited. If you have received this e-mail in error, please notify the sender and delete the e-mail. From shuriku at shurik.kiev.ua Thu Feb 22 08:40:58 2018 From: shuriku at shurik.kiev.ua (Alexandr Krivulya) Date: Thu, 22 Feb 2018 10:40:58 +0200 Subject: [ovirt-users] oVirt 4.2: hostdev passthrough not working any more In-Reply-To: References: Message-ID: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> Hello, the same problem after upgrade to 4.2.1 :( 18.01.2018 11:53, Daniel Helgenberger wrote: > Hello, > > yesterday I upgraded to 4.2.0 from 4.1.8. > > Now I notice I cannot assign hostdev passthrough any more; in the GUI > the 'Pinned to host' list is empty; I cannot select any host for > passthrough host pinning. > > When I was creating the particular VM in 4.1 it was working as expected. > > The hostdevs from before the upgrade are still present. I tried to > remove them and got an NPE (see below). > > As a workaround, is the old hostusb[1] method I know back from 3.x still working in the 4.x > line?
> > AFAICT IOMMU is working > >> dmesg | grep -e DMAR -e IOMMU >> [ 0.000000] ACPI: DMAR 000000007b7e7000 002C6 (v01 HP ProLiant 00000001 HP 00000001) >> [ 0.168032] DMAR: Host address width 46 >> [ 0.168034] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0 >> [ 0.168047] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de >> [ 0.168050] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1 >> [ 0.168061] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de >> [ 0.168063] DMAR: RMRR base: 0x00000079174000 end: 0x00000079176fff >> [ 0.168065] DMAR: RMRR base: 0x000000791f4000 end: 0x000000791f7fff >> [ 0.168067] DMAR: RMRR base: 0x000000791de000 end: 0x000000791f3fff >> [ 0.168070] DMAR: RMRR base: 0x000000791cb000 end: 0x000000791dbfff >> [ 0.168071] DMAR: RMRR base: 0x000000791dc000 end: 0x000000791ddfff >> [ 0.168073] DMAR: ATSR flags: 0x0 >> [ 0.168075] DMAR: ATSR flags: 0x0 >> [ 0.168079] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbffc000 IOMMU 0 >> [ 0.168082] DMAR-IR: IOAPIC id 8 under DRHD base 0xc7ffc000 IOMMU 1 >> [ 0.168084] DMAR-IR: IOAPIC id 9 under DRHD base 0xc7ffc000 IOMMU 1 >> [ 0.168086] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000 >> [ 0.168088] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. 
>> [ 0.169073] DMAR-IR: Enabled IRQ remapping in x2apic mode > Thanks, > > [1] https://www.ovirt.org/develop/release-management/features/virt/hostusb/ From mpolednik at redhat.com Thu Feb 22 09:30:09 2018 From: mpolednik at redhat.com (Martin Polednik) Date: Thu, 22 Feb 2018 10:30:09 +0100 Subject: [ovirt-users] oVirt 4.2: hostdev passthrough not working any more In-Reply-To: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> References: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> Message-ID: <20180222093008.GA8924@Alexandra.local> On 22/02/18 10:40 +0200, Alexandr Krivulya wrote: >Hello, the same problem after upgrade to 4.2.1 :( > > >18.01.2018 11:53, Daniel Helgenberger ?????: >>Hello, >> >>yesterday I upgraded to 4.2.0 from 4.1.8. >> >>Now I notice I cannot assign host dev pass though any more; in the GUI >>the 'Pinnded to host' list is empty; I cannot select any host for pass >>through host pinning. Does the host that previously worked report device passthrough capability? In the UI it's the "Device Passthrough: Enabled" field. (and similarly named field in vdsm-client getCapabilities call) >>When I was creating the particular VM in 4.1 it was working as expected. >> >>The hostdev from before the upgrades are still present. I tried to >>remove them and got an NPE (see below). >> >>As a workaround, is the old hostusb[1] method I know back from 3.x still working in the 4.x >>line? 
>> >>AFAICT IOMMU is working >> >>>dmesg | grep -e DMAR -e IOMMU >>>[ 0.000000] ACPI: DMAR 000000007b7e7000 002C6 (v01 HP ProLiant 00000001 HP 00000001) >>>[ 0.168032] DMAR: Host address width 46 >>>[ 0.168034] DMAR: DRHD base: 0x000000fbffc000 flags: 0x0 >>>[ 0.168047] DMAR: dmar0: reg_base_addr fbffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de >>>[ 0.168050] DMAR: DRHD base: 0x000000c7ffc000 flags: 0x1 >>>[ 0.168061] DMAR: dmar1: reg_base_addr c7ffc000 ver 1:0 cap 8d2078c106f0466 ecap f020de >>>[ 0.168063] DMAR: RMRR base: 0x00000079174000 end: 0x00000079176fff >>>[ 0.168065] DMAR: RMRR base: 0x000000791f4000 end: 0x000000791f7fff >>>[ 0.168067] DMAR: RMRR base: 0x000000791de000 end: 0x000000791f3fff >>>[ 0.168070] DMAR: RMRR base: 0x000000791cb000 end: 0x000000791dbfff >>>[ 0.168071] DMAR: RMRR base: 0x000000791dc000 end: 0x000000791ddfff >>>[ 0.168073] DMAR: ATSR flags: 0x0 >>>[ 0.168075] DMAR: ATSR flags: 0x0 >>>[ 0.168079] DMAR-IR: IOAPIC id 10 under DRHD base 0xfbffc000 IOMMU 0 >>>[ 0.168082] DMAR-IR: IOAPIC id 8 under DRHD base 0xc7ffc000 IOMMU 1 >>>[ 0.168084] DMAR-IR: IOAPIC id 9 under DRHD base 0xc7ffc000 IOMMU 1 >>>[ 0.168086] DMAR-IR: HPET id 0 under DRHD base 0xc7ffc000 >>>[ 0.168088] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping. >>>[ 0.169073] DMAR-IR: Enabled IRQ remapping in x2apic mode >>Thanks, >> >>[1] https://www.ovirt.org/develop/release-management/features/virt/hostusb/ > >_______________________________________________ >Users mailing list >Users at ovirt.org >http://lists.ovirt.org/mailman/listinfo/users From mkalfon at redhat.com Thu Feb 22 07:52:16 2018 From: mkalfon at redhat.com (Mor Kalfon) Date: Thu, 22 Feb 2018 09:52:16 +0200 Subject: [ovirt-users] Manageiq ovn In-Reply-To: References: <20180216172212.464828c3@t460p> <20180219120531.277ebcec@t460p> <20180219123734.174899aa@t460p> Message-ID: On Wed, Feb 21, 2018 at 6:17 PM, Alona Kaplan wrote: > Hi Alexy. 
> > First of all, please reply to the users at ovirt.org list, so all our users can > enjoy the discussion. > > To summarize, currently you have two questions. > > 1. How to automatically trigger the provider refresh after doing changes > to the provider? > > There is an open RFE regarding it - https://bugzilla.redhat.com/1547415, > you can add yourself to its CC list to track it. > > 2. Adding a router with an external gateway is not working, since an ip > address is expected in external_fixed_ips by the ovn provider but manageiq > doesn't provide it. > Looking at the neutron api (https://developer.openstack. > org/api-ref/network/v2/#create-router) it seems the ip address is mandatory. > So it is a manageiq bug (I also tried to add a router with an external > gateway and no ip address directly to neutron and got an error). > > As a workaround to the bug, you can add the router to the ovn-provider > directly using the api - https://gist.github.com/dominikholler/ > f58658407ae7620280f4cb47c398d849 > > Mor, can you please open a bug regarding the issue? > > Sure, I opened two bugs: https://bugzilla.redhat.com/show_bug.cgi?id=1547878 https://bugzilla.redhat.com/show_bug.cgi?id=1547872 On Tue, Feb 20, 2018 at 12:32 PM, Aliaksei Nazarenka < > aliaksei.nazarenka at gmail.com> wrote: > >> Hi, Alona! >> Can you help me with adding an external ip in the create-router procedure? >> >> 2018-02-19 14:52 GMT+03:00 Aliaksei Nazarenka < >> aliaksei.nazarenka at gmail.com>: >> >>> I do not really understand how this will work: you >>> specify 10.0.0.2 for the router, while DHCP will hand out the gateway ip >>> 10.0.0.1? It seems to me that the gateway should just act as a >>> router, or am I wrong?
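[Editor's note: per the analysis above, the request ManageIQ sends fails only because the external_fixed_ips entry lacks an ip_address. A sketch of a corrected POST /v2.0/routers body, reusing the network and subnet UUIDs from the logs in this thread; the gateway address 10.0.0.2 is an example and must belong to the external subnet. Note that enable_snat is a boolean in the Neutron API, whereas ManageIQ sent 0:]

```json
{
  "router": {
    "name": "test_router",
    "external_gateway_info": {
      "network_id": "17c31685-56ef-428a-94dd-3202bf407d36",
      "enable_snat": false,
      "external_fixed_ips": [
        {
          "subnet_id": "c425f071-4b4e-4598-8c56-d5457a59dac3",
          "ip_address": "10.0.0.2"
        }
      ]
    }
  }
}
```

This matches the shape of the working request Dominik posted below (gist), which the ovn provider accepts.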
>>> >>> 2018-02-19 14:46 GMT+03:00 Aliaksei Nazarenka < >>> aliaksei.nazarenka at gmail.com>: >>> >>>> Dominik sent me this link here - https://gist.github.com/domini >>>> kholler/f58658407ae7620280f4cb47c398d849 >>>> >>>> 2018-02-19 14:45 GMT+03:00 Aliaksei Nazarenka < >>>> aliaksei.nazarenka at gmail.com>: >>>> >>>>> Hi, Alona! >>>>> Dominik said that you can help. I need to create an external gateway >>>>> in manageiq, I did not find a native way to do this. As a result of lack of >>>>> ip address, I can not create a router. Here are the logs: >>>>> >>>>> 2018-02-19 14:22:16,942 root Starting server >>>>> 2018-02-19 14:22:16,943 root Version: 1.2.5-1 >>>>> 2018-02-19 14:22:16,943 root Build date: 20180117090014 >>>>> 2018-02-19 14:22:16,944 root Githash: 12b705d >>>>> 2018-02-19 14:23:17,250 root From: 10.0.184.20:57674 Request: POST >>>>> /v2.0/tokens >>>>> 2018-02-19 14:23:17,252 root Request body: >>>>> {"auth": {"tenantName": "tenant", "passwordCredentials": {"username": >>>>> "admin at internal", "password": ""}}} >>>>> 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:17,870 root Response code: 200 >>>>> 2018-02-19 14:23:17,870 root Response body: {"access": {"token": >>>>> {"expires": "2018-02-23T15:23:17Z", "id": "lmceG8s1SROKskN-H4T93IPwgwSFb >>>>> 8mN7UJ6qz4HObrC2PqNWcSyS_dZGIcax6dEBVIz8H6ShgDXl_2fvflbeg"}, >>>>> "serviceCatalog": [{"endpoints_links": [], "endpoints": [{"adminURL": " >>>>> https://lbn-r-engine-01.mp.local:9696/", "region": 
"RegionOne", "id": >>>>> "00000000000000000000000000000001", "internalURL": " >>>>> https://lbn-r-engine-01.mp.local:9696/", "publicURL": " >>>>> https://lbn-r-engine-01.mp.local:9696/"}], "type": "network", "name": >>>>> "neutron"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>>> https://lbn-r-engine-01.mp.local:35357/", "region": "RegionOne", >>>>> "publicURL": "https://lbn-r-engine-01.mp.local:35357/", >>>>> "internalURL": "https://lbn-r-engine-01.mp.local:35357/", "id": >>>>> "00000000000000000000000000000002"}], "type": "identity", "name": >>>>> "keystone"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>>> https://lbn-r-engine-01.mp.local:8774/v2.1/", "region": "RegionOne", >>>>> "publicURL": "https://lbn-r-engine-01.mp.local:8774/v2.1/", >>>>> "internalURL": "https://lbn-r-engine-01.mp.local:8774/v2.1/", "id": >>>>> "00000000000000000000000000000002"}], "type": "compute", "name": >>>>> "nova"}], "user": {"username": "admin", "roles_links": [], "id": "", >>>>> "roles": [{"name": "admin"}], "name": "admin"}}} >>>>> 2018-02-19 14:23:17,974 root From: 10.0.184.20:43600 Request: POST >>>>> /v2.0/routers >>>>> 2018-02-19 14:23:17,974 root Request body: >>>>> {"router":{"name":"test_router","external_gateway_info":{"ne >>>>> twork_id":"17c31685-56ef-428a-94dd-3202bf407d36","external_f >>>>> ixed_ips":[{"subnet_id":"c425f071-4b4e-4598-8c56-d5457a59dac >>>>> 3"}],"enable_snat":0}}} >>>>> 2018-02-19 14:23:17,980 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:17,980 requests.packages.urllib3.connectionpool >>>>> Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>> 2018-02-19 14:23:18,391 root ip_address missing in the external >>>>> gateway information. 
>>>>> Traceback (most recent call last): >>>>> File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >>>>> 131, in _handle_request >>>>> method, path_parts, content) >>>>> File "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>>> line 175, in handle_request >>>>> return self.call_response_handler(handler, content, parameters) >>>>> File "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, >>>>> in call_response_handler >>>>> return response_handler(ovn_north, content, parameters) >>>>> File "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>>> line 205, in post_routers >>>>> router = nb_db.add_router(received_router) >>>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line 57, in wrapper >>>>> validate_rest_input(rest_data) >>>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line 530, in validate_add_rest_input >>>>> RouterMapper._validate_external_gateway_info(rest_data) >>>>> File "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>> line 565, in _validate_external_gateway_info >>>>> message.format(key=RouterMapper.REST_ROUTER_IP_ADDRESS) >>>>> RestDataError >>>>> >>>>> >>>>> 2018-02-19 14:37 GMT+03:00 Dominik Holler : >>>>> >>>>>> >>>>>> >>>>>> On Mon, 19 Feb 2018 14:29:39 +0300 >>>>>> Aliaksei Nazarenka wrote: >>>>>> >>>>>> > How is this external gateway configured? >>>>>> > >>>>>> >>>>>> >>>>>> I created this log entry by >>>>>> https://gist.github.com/dominikholler/f58658407ae7620280f4cb >>>>>> 47c398d849 >>>>>> >>>>>> But maybe Alona will tell you how to do this with ManageIQ tomorrow. 
>>>>>> >>>>>> > 2018-02-19 14:27 GMT+03:00 Aliaksei Nazarenka >>>>>> > : >>>>>> > >>>>>> > > 2018-02-19 14:22:16,942 root Starting server >>>>>> > > 2018-02-19 14:22:16,943 root Version: 1.2.5-1 >>>>>> > > 2018-02-19 14:22:16,943 root Build date: 20180117090014 >>>>>> > > 2018-02-19 14:22:16,944 root Githash: 12b705d >>>>>> > > 2018-02-19 14:23:17,250 root From: 10.0.184.20:57674 Request: >>>>>> POST >>>>>> > > /v2.0/tokens >>>>>> > > 2018-02-19 14:23:17,252 root Request body: >>>>>> > > {"auth": {"tenantName": "tenant", "passwordCredentials": >>>>>> > > {"username": "admin at internal", "password": ""}}} >>>>>> > > 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>>> > > 2018-02-19 14:23:17,322 requests.packages.urllib3.connectionpool >>>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>>> > > 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>>> > > 2018-02-19 14:23:17,836 requests.packages.urllib3.connectionpool >>>>>> > > Starting new HTTPS connection (1): lbn-r-engine-01.mp.local >>>>>> > > 2018-02-19 14:23:17,870 root Response code: 200 >>>>>> > > 2018-02-19 14:23:17,870 root Response body: {"access": {"token": >>>>>> > > {"expires": "2018-02-23T15:23:17Z", "id": "lmceG8s1SROKskN- >>>>>> > > H4T93IPwgwSFb8mN7UJ6qz4HObrC2PqNWcSyS_dZGIcax6dEBVIz8H6ShgDX >>>>>> l_2fvflbeg"}, >>>>>> > > "serviceCatalog": [{"endpoints_links": [], "endpoints": >>>>>> > > [{"adminURL": " https://lbn-r-engine-01.mp.local:9696/", >>>>>> "region": >>>>>> > > "RegionOne", "id": " 00000000000000000000000000000001", >>>>>> > > "internalURL": " https://lbn-r-engine-01.mp.local:9696/", >>>>>> > > "publicURL": " https://lbn-r-engine-01.mp.local:9696/"}], "type": >>>>>> > > "network", "name": "neutron"}, {"endpoints_links": [], >>>>>> "endpoints": >>>>>> > > [{"adminURL": " 
https://lbn-r-engine-01.mp.local:35357/", >>>>>> "region": >>>>>> > > "RegionOne", "publicURL": >>>>>> > > "https://lbn-r-engine-01.mp.local:35357/", "internalURL": " >>>>>> > > https://lbn-r-engine-01.mp.local:35357/", "id": " >>>>>> > > 00000000000000000000000000000002"}], "type": "identity", "name": >>>>>> > > "keystone"}, {"endpoints_links": [], "endpoints": [{"adminURL": " >>>>>> > > https://lbn-r-engine-01.mp.local:8774/v2.1/", "region": >>>>>> > > "RegionOne", "publicURL": >>>>>> > > "https://lbn-r-engine-01.mp.local:8774/v2.1/", "internalURL": >>>>>> > > "https://lbn-r-engine-01.mp.local:8774/v2.1/", "id": " >>>>>> > > 00000000000000000000000000000002"}], "type": "compute", "name": >>>>>> > > "nova"}], "user": {"username": "admin", "roles_links": [], "id": >>>>>> > > "", "roles": [{"name": "admin"}], "name": "admin"}}} 2018-02-19 >>>>>> > > 14:23:17,974 root From: 10.0.184.20:43600 Request: >>>>>> > > POST /v2.0/routers 2018-02-19 14:23:17,974 root Request body: >>>>>> > > {"router":{"name":"test_router","external_gateway_ >>>>>> > > info":{"network_id":"17c31685-56ef-428a-94dd-3202bf407d36"," >>>>>> > > external_fixed_ips":[{"subnet_id":"c425f071-4b4e-4598-8c56- >>>>>> > > d5457a59dac3"}],"enable_snat":0}}} 2018-02-19 14:23:17,980 >>>>>> > > requests.packages.urllib3.connectionpool Starting new HTTPS >>>>>> > > connection (1): lbn-r-engine-01.mp.local 2018-02-19 14:23:17,980 >>>>>> > > requests.packages.urllib3.connectionpool Starting new HTTPS >>>>>> > > connection (1): lbn-r-engine-01.mp.local 2018-02-19 14:23:18,391 >>>>>> > > root ip_address missing in the external gateway information. 
>>>>>> > > Traceback (most recent call last): >>>>>> > > File "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", >>>>>> > > line 131, in _handle_request >>>>>> > > method, path_parts, content) >>>>>> > > File >>>>>> > > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>>>> line >>>>>> > > 175, in handle_request return self.call_response_handler(handler, >>>>>> > > content, parameters) File >>>>>> > > "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, in >>>>>> > > call_response_handler return response_handler(ovn_north, content, >>>>>> > > parameters) File >>>>>> > > "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>>>> line >>>>>> > > 205, in post_routers router = nb_db.add_router(received_router) >>>>>> > > File "/usr/share/ovirt-provider-ovn >>>>>> /ovndb/ovn_north_mappers.py", >>>>>> > > line 57, in wrapper >>>>>> > > validate_rest_input(rest_data) >>>>>> > > File "/usr/share/ovirt-provider-ovn >>>>>> /ovndb/ovn_north_mappers.py", >>>>>> > > line 530, in validate_add_rest_input >>>>>> > > RouterMapper._validate_external_gateway_info(rest_data) >>>>>> > > File "/usr/share/ovirt-provider-ovn >>>>>> /ovndb/ovn_north_mappers.py", >>>>>> > > line 565, in _validate_external_gateway_info >>>>>> > > message.format(key=RouterMapper.REST_ROUTER_IP_ADDRESS) >>>>>> > > RestDataError >>>>>> > > >>>>>> > > >>>>>> > > It can be seen that there is no external ip for the router. But >>>>>> how >>>>>> > > to ask it and where is it done? >>>>>> > > >>>>>> > > 2018-02-19 14:05 GMT+03:00 Dominik Holler : >>>>>> > > >>>>>> > >> Hi Alexey, >>>>>> > >> can you please change level of logger_root and handler_logfile in >>>>>> > >> /etc/ovirt-provider-ovn/logger.conf >>>>>> > >> to DEBUG, restart ovirt-provider-ovn, try to create the router >>>>>> > >> again and share the logfile with us? >>>>>> > >> The logfile has to contain the relevant request, e.g. 
similar to >>>>>> > >> this: >>>>>> > >> >>>>>> > >> 2018-02-19 11:59:12,477 root From: 192.168.122.79:50084 Request: >>>>>> > >> POST /v2.0/routers.json 2018-02-19 11:59:12,477 root Request >>>>>> body: >>>>>> > >> {"router": {"external_gateway_info": {"network_id": >>>>>> > >> "c1d4f8e3-8b5d-464e-825a-5f615a18a900", "enable_snat": false, >>>>>> > >> "external_fixed_ips": [{"subnet_id": >>>>>> > >> "08efc369-ff36-4dd4-b5f9-ada86d7724db", "ip_address": >>>>>> > >> "10.0.0.2"}]}, "name": "add_router_router", "admin_state_up": >>>>>> > >> true}} >>>>>> > >> >>>>>> > >> Thanks, >>>>>> > >> Dominik >>>>>> > >> >>>>>> > >> >>>>>> > >> On Mon, 19 Feb 2018 11:40:02 +0300 >>>>>> > >> Aliaksei Nazarenka wrote: >>>>>> > >> >>>>>> > >> > Good afternoon! >>>>>> > >> > With the synchronization of the created networks in manageiq >>>>>> and >>>>>> > >> > ovirt everything is OK, thanks a lot! The only nuance - after >>>>>> > >> > creating a network or subnet in manageiq, you need to manually >>>>>> > >> > update the state after which you can see these items in the >>>>>> > >> > list. Is there any way to automate this process? Also, maybe >>>>>> you >>>>>> > >> > can help me: when I create a router, I get an error. Unable to >>>>>> > >> > create a Network Router "test": undefined method `[] 'for nil: >>>>>> > >> > NilClass and in the logs at this point next" 2018-02-19 11: >>>>>> 22: >>>>>> > >> > 19,391 root ip_address missing in the external gateway >>>>>> > >> > information. 
Traceback (most recent last call last): File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/base_handler.py", line >>>>>> > >> > 131, in _handle_request method, path_parts, content) >>>>>> > >> > File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/selecting_handler.py", >>>>>> > >> > line 175, in handle_request return self.call_response_handler >>>>>> > >> > (handler, content, parameters) File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/neutron.py", line 36, >>>>>> in >>>>>> > >> > call_response_handler return response_handler (ovn_north, >>>>>> > >> > content, parameters) File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/handlers/neutron_responses.py", >>>>>> > >> > line 205, in post_routers router = nb_db.add_router >>>>>> > >> > (received_router) File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>>> line >>>>>> > >> > 57, in wrapper validate_rest_input (rest_data) >>>>>> > >> > File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>>> line >>>>>> > >> > 530, in validate_add_rest_input >>>>>> > >> > RouterMapper._validate_external_gateway_info (rest_data) File >>>>>> > >> > "/usr/share/ovirt-provider-ovn/ovndb/ovn_north_mappers.py", >>>>>> line >>>>>> > >> > 565, in _validate_external_gateway_info message.format (key = >>>>>> > >> > RouterMapper.REST_ROUTER_IP_ADDRESS) RestDataError " >>>>>> > >> > Swears at the missing external ip address of the router. The >>>>>> > >> > question is how to set it? >>>>>> > >> > >>>>>> > >> > 2018-02-16 19:22 GMT+03:00 Dominik Holler >>>>> >: >>>>>> > >> > >>>>>> > >> > > Hi Alexey, >>>>>> > >> > > For the provider ovirt-provider-ovn created by engine-setup >>>>>> the >>>>>> > >> > > automatic synchronization of networks of cluster with this >>>>>> > >> > > provider as default network provider is activated by >>>>>> > >> > > engine-setup. 
>>>>>> > >> > > >>>>>> > >> > > Please find [1] if you want to activate this feature for >>>>>> other >>>>>> > >> > > providers, too. Additional information about controlling the >>>>>> > >> > > synchronization are available in [2]. >>>>>> > >> > > >>>>>> > >> > > Please find my question below. >>>>>> > >> > > >>>>>> > >> > > [1] >>>>>> > >> > > https://bugzilla.redhat.com/attachment.cgi?id=1397090 >>>>>> > >> > > >>>>>> > >> > > [2] >>>>>> > >> > > http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/ >>>>>> > >> > > open_stack_network_provider/attributes/auto_sync >>>>>> > >> > > >>>>>> > >> > > On Fri, 16 Feb 2018 10:00:46 +0200 >>>>>> > >> > > Alona Kaplan wrote: >>>>>> > >> > > >>>>>> > >> > > > Hi Dominik, >>>>>> > >> > > > >>>>>> > >> > > > Can you please help Alexey? >>>>>> > >> > > > >>>>>> > >> > > > Thanks, >>>>>> > >> > > > Alona. >>>>>> > >> > > > >>>>>> > >> > > > On Feb 16, 2018 09:48, "Aliaksei Nazarenka" >>>>>> > >> > > > wrote: >>>>>> > >> > > > >>>>>> > >> > > > Hello! >>>>>> > >> > > > I read this - " >>>>>> > >> > > > Dominik Holler 2018-01-25 10:45:09 EST >>>>>> > >> > > > >>>>>> > >> > > > Currently, the property is only available in rest-api and >>>>>> not >>>>>> > >> > > > available in webadmin. For backward compatibility, the >>>>>> > >> > > > property is set to 'disabled' by default in rest-api and >>>>>> > >> > > > webadmin. If you think the property should be available in >>>>>> > >> > > > webadmin, please create a bug with a proposed default value >>>>>> > >> > > > to track this." >>>>>> > >> > > > >>>>>> > >> > > > and i understand this feature (auto add ovn network in >>>>>> > >> > > > ovirt) of now. How can i do to on it? I read all comments, >>>>>> > >> > > > but strangely - most of the files either do not exist for >>>>>> me >>>>>> > >> > > > or are in other places and >>>>>> > >> > > >>>>>> > >> > > Can you help me finding this comments? >>>>>> > >> > > >>>>>> > >> > > > already have the current version. 
Could you tell me >>>>>> > >> > > > specifically where this function is turned on? I will >>>>>> > >> > > > repeat, I use Ovirt engine >>>>>> > >> > > > 4.2.2.1-0.0.master.20180214165528.git38ff5af.el7.centos >>>>>> > >> > > > >>>>>> > >> > > > >>>>>> > >> > > > >>>>>> > >> > > > 2018-02-15 18:03 GMT+03:00 Alona Kaplan >>>>>> > >> > > > : >>>>>> > >> > > > > Currently, AFAIK there is no request to add this >>>>>> > >> > > > > functionality to manageiq. You're welcome to open a bug >>>>>> to >>>>>> > >> > > > > request it. Anyway, you can easily attach ovn networks to >>>>>> > >> > > > > vms using ovirt. >>>>>> > >> > > > > >>>>>> > >> > > > > On Feb 15, 2018 16:11, "Aliaksei Nazarenka" >>>>>> > >> > > > > wrote: >>>>>> > >> > > > > >>>>>> > >> > > > >> Is it planned to add this functionality? >>>>>> > >> > > > >> >>>>>> > >> > > > >> 2018-02-15 17:10 GMT+03:00 Alona Kaplan >>>>>> > >> > > > >> : >>>>>> > >> > > > >>> >>>>>> > >> > > > >>> >>>>>> > >> > > > >>> On Thu, Feb 15, 2018 at 4:03 PM, Aliaksei Nazarenka < >>>>>> > >> > > > >>> aliaksei.nazarenka at gmail.com> wrote: >>>>>> > >> > > > >>> >>>>>> > >> > > > >>>> and how i can change network in the created VM? >>>>>> > >> > > > >>>> >>>>>> > >> > > > >>> >>>>>> > >> > > > >>> It is not possible via manageiq. Only via ovirt. >>>>>> > >> > > > >>> >>>>>> > >> > > > >>> >>>>>> > >> > > > >>>> >>>>>> > >> > > > >>>> Sorry for my intrusive questions))) >>>>>> > >> > > > >>>> >>>>>> > >> > > > >>>> 2018-02-15 16:51 GMT+03:00 Aliaksei Nazarenka < >>>>>> > >> > > > >>>> aliaksei.nazarenka at gmail.com>: >>>>>> > >> > > > >>>> >>>>>> > >> > > > >>>>> ovirt-provider-ovn-1.2.7-0.201 >>>>>> 80213232754.gitebd60ad.el7. 
>>>>>> > >> > > centos.noarch >>>>>> > >> > > > >>>>> on hosted-engine >>>>>> > >> > > > >>>>> ovirt-provider-ovn-driver-1.2.5-1.el7.centos.noarch >>>>>> on >>>>>> > >> > > > >>>>> ovirt hosts >>>>>> > >> > > > >>>>> >>>>>> > >> > > > >>>>> 2018-02-15 16:40 GMT+03:00 Alona Kaplan >>>>>> > >> > > > >>>>> : >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>> On Thu, Feb 15, 2018 at 3:36 PM, Aliaksei Nazarenka >>>>>> > >> > > > >>>>>> < aliaksei.nazarenka at gmail.com> wrote: >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>>> when i try to create network router, i see this >>>>>> > >> > > > >>>>>>> message: *Unable to create Network Router >>>>>> > >> > > > >>>>>>> "test_router": undefined method `[]' for >>>>>> > >> > > > >>>>>>> nil:NilClass* >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>> What ovn-provider version you're using? Can you >>>>>> please >>>>>> > >> > > > >>>>>> attach the ovn provider log >>>>>> > >> > > > >>>>>> ( /var/log/ovirt-provider-ovn.log)? >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>>>> >>>>>> > >> > > > >>>>>>> 2018-02-15 16:20 GMT+03:00 Aliaksei Nazarenka < >>>>>> > >> > > > >>>>>>> aliaksei.nazarenka at gmail.com>: >>>>>> > >> > > > >>>>>>> >>>>>> > >> > > > >>>>>>>> Big Thank you! This work! But... Networks are >>>>>> > >> > > > >>>>>>>> created, but I do not see them in the ovirt >>>>>> > >> > > > >>>>>>>> manager, but through the ovn-nbctl command, I see >>>>>> > >> > > > >>>>>>>> all the networks. And maybe you can tell me how to >>>>>> > >> > > > >>>>>>>> assign a VM network from Manageiq? 
>>>>>> > >> > > > >>>>>>>> >>>>>> > >> > > > >>>>>>>> 2018-02-15 15:01 GMT+03:00 Alona Kaplan >>>>>> > >> > > > >>>>>>>> : >>>>>> > >> > > > >>>>>>>> >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>> On Thu, Feb 15, 2018 at 1:54 PM, Aliaksei >>>>>> > >> > > > >>>>>>>>> Nazarenka < aliaksei.nazarenka at gmail.com> wrote: >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>>> Error - 1 Minute Ago >>>>>> > >> > > > >>>>>>>>>> undefined method `orchestration_stacks' for >>>>>> > >> > > > >>>>>>>>>> #>>>>> > >> :InfraManager:0x00000007bf9288> >>>>>> > >> > > > >>>>>>>>>> - I get this message if I try to create a >>>>>> network >>>>>> > >> > > > >>>>>>>>>> of overts and then try to check the status of >>>>>> the >>>>>> > >> > > > >>>>>>>>>> network manager. >>>>>> > >> > > > >>>>>>>>>> >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>> It is the same bug. >>>>>> > >> > > > >>>>>>>>> You need to apply the fixes in >>>>>> > >> > > > >>>>>>>>> https://github.com/ManageIQ/ma >>>>>> > >> > > > >>>>>>>>> nageiq-providers-ovirt/pull/198/files to make it >>>>>> > >> > > > >>>>>>>>> work. The best option is to upgrade your version. >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>>>> 2018-02-15 14:28 GMT+03:00 Aliaksei Nazarenka < >>>>>> > >> > > > >>>>>>>>>> aliaksei.nazarenka at gmail.com>: >>>>>> > >> > > > >>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>> I tried to make changes to the file >>>>>> > >> > > > >>>>>>>>>>> refresher_ovn_provider.yml - changed the >>>>>> > >> > > > >>>>>>>>>>> passwords, corrected the names of the names, >>>>>> but >>>>>> > >> > > > >>>>>>>>>>> it was not successful. >>>>>> > >> > > > >>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>> 2018-02-15 14:26 GMT+03:00 Aliaksei Nazarenka < >>>>>> > >> > > > >>>>>>>>>>> aliaksei.nazarenka at gmail.com>: >>>>>> > >> > > > >>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>> Hi! 
>>>>>> > >> > > > >>>>>>>>>>>> I'm use oVirt 4.2.2 + Manageiq >>>>>> > >> > > > >>>>>>>>>>>> gaprindashvili-1.2018012514301 9_1450f27 >>>>>> > >> > > > >>>>>>>>>>>> After i set this commits (upstream - >>>>>> > >> > > > >>>>>>>>>>>> https://bugzilla.redhat.com/1542063) i no saw >>>>>> > >> > > > >>>>>>>>>>>> changes. >>>>>> > >> > > > >>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>> 2018-02-15 11:22 GMT+03:00 Alona Kaplan >>>>>> > >> > > > >>>>>>>>>>>> : >>>>>> > >> > > > >>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> Hi, >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> What version of manageiq you are using? >>>>>> > >> > > > >>>>>>>>>>>>> We had a bug >>>>>> > >> > > > >>>>>>>>>>>>> https://bugzilla.redhat.com/1542152 >>>>>> (upstream >>>>>> > >> > > > >>>>>>>>>>>>> - https://bugzilla.redhat.com/1542063) that >>>>>> > >> > > > >>>>>>>>>>>>> was fixed in version 5.9.0.20 >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> Please let me know it upgrading the version >>>>>> > >> > > > >>>>>>>>>>>>> helped you. >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> Thanks, >>>>>> > >> > > > >>>>>>>>>>>>> Alona. >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> On Wed, Feb 14, 2018 at 2:32 PM, Aliaksei >>>>>> > >> > > > >>>>>>>>>>>>> Nazarenka < aliaksei.nazarenka at gmail.com> >>>>>> > >> > > > >>>>>>>>>>>>> wrote: >>>>>> > >> > > > >>>>>>>>>>>>>> Good afternoon! >>>>>> > >> > > > >>>>>>>>>>>>>> I read your article - >>>>>> > >> > > > >>>>>>>>>>>>>> https://www.ovirt.org/develop/ >>>>>> > >> > > > >>>>>>>>>>>>>> release-management/features/ne >>>>>> twork/manageiq_ovn/. >>>>>> > >> > > > >>>>>>>>>>>>>> I have only one question: how to create a >>>>>> > >> > > > >>>>>>>>>>>>>> network or subnet in Manageiq + ovirt 4.2.1. 
>>>>>> > >> > > > >>>>>>>>>>>>>> When I try to create a network, I need to >>>>>> > >> > > > >>>>>>>>>>>>>> select a tenant, but there is nothing that I >>>>>> > >> > > > >>>>>>>>>>>>>> could choose. How can it be? >>>>>> > >> > > > >>>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>>> Sincerely. Alexey Nazarenko >>>>>> > >> > > > >>>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>>> >>>>>> > >> > > > >>>>>>>>>> >>>>>> > >> > > > >>>>>>>>> >>>>>> > >> > > > >>>>>>>> >>>>>> > >> > > > >>>>>>> >>>>>> > >> > > > >>>>>> >>>>>> > >> > > > >>>>> >>>>>> > >> > > > >>>> >>>>>> > >> > > > >>> >>>>>> > >> > > > >> >>>>>> > >> > > >>>>>> > >> > > >>>>>> > >> >>>>>> > >> >>>>>> > > >>>>>> >>>>>> >>>>> >>>> >>> >> > -- Mor Kalfon RHV Networking Team Red Hat IL-Raanana Tel: +972-54-6514148 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuriku at shurik.kiev.ua Thu Feb 22 09:37:34 2018 From: shuriku at shurik.kiev.ua (Alexandr Krivulya) Date: Thu, 22 Feb 2018 11:37:34 +0200 Subject: [ovirt-users] oVirt 4.2: hostdev passthrough not working any more In-Reply-To: <20180222093008.GA8924@Alexandra.local> References: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> <20180222093008.GA8924@Alexandra.local> Message-ID: 22.02.2018 11:30, Martin Polednik ?????: > On 22/02/18 10:40 +0200, Alexandr Krivulya wrote: >> Hello, the same problem after upgrade to 4.2.1 :( >> >> >> 18.01.2018 11:53, Daniel Helgenberger ?????: >>> Hello, >>> >>> yesterday I upgraded to 4.2.0 from 4.1.8. >>> >>> Now I notice I cannot assign host dev pass though any more; in the GUI >>> the 'Pinnded to host' list is empty; I cannot select any host for pass >>> through host pinning. > > Does the host that previously worked report device passthrough > capability? In the UI it's the "Device Passthrough: Enabled" field. 
> (and similarly named field in vdsm-client getCapabilities call) Now "Device Passthrough: Disabled", but usb passthrough works well on 4.1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabrielstein at gmail.com Thu Feb 22 09:50:45 2018 From: gabrielstein at gmail.com (Gabriel Stein) Date: Thu, 22 Feb 2018 10:50:45 +0100 Subject: [ovirt-users] Network and VLANs Message-ID: Hi all, I have some problems adding VLANs to my VMs and I don't know if there is a better way to do that, like an 'oVirt Way'. All I need is to have a VM on "Test Network" that communicates with other hardware/VMs on "Test Network". All VLANs are configured on my Switch, the Hosts from oVirt are connected and tagged to these VLANs. Is there a "oVirt Way" to do that other than "Setup Networks"? Can I use with oVirt an Virtual Switch? Or a Network Stack? I wrote this Bug about my Problem... Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1540463 Thanks in Advance! Best Regards, Gabriel Gabriel Stein ------------------------------ Gabriel Ferraz Stein Tel.: +49 (0) 170 2881531 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabrielstein at gmail.com Thu Feb 22 09:52:24 2018 From: gabrielstein at gmail.com (Gabriel Stein) Date: Thu, 22 Feb 2018 10:52:24 +0100 Subject: [ovirt-users] Network and VLANs In-Reply-To: References: Message-ID: s/ wrote this Bug about my Problem/There is a Bug with my Problem"/g Gabriel Stein ------------------------------ Gabriel Ferraz Stein Tel.: +49 (0) 170 2881531 2018-02-22 10:50 GMT+01:00 Gabriel Stein : > Hi all, > > I have some problems adding VLANs to my VMs and I don't know if there is > a better way to do that, like an 'oVirt Way'. > > All I need is to have a VM on "Test Network" that communicates with > other hardware/VMs on "Test Network". All VLANs are configured on my > Switch, the Hosts from oVirt are connected and tagged to these VLANs.
> > Is there a "oVirt Way" to do that other than "Setup Networks"? Can I use > with oVirt an Virtual Switch? Or a Network Stack? > > I wrote this Bug about my Problem... > > Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1540463 > > Thanks in Advance! > > Best Regards, > > Gabriel > > > > Gabriel Stein > ------------------------------ > Gabriel Ferraz Stein > Tel.: +49 (0) 170 2881531 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahadas at redhat.com Thu Feb 22 10:09:56 2018 From: ahadas at redhat.com (Arik Hadas) Date: Thu, 22 Feb 2018 12:09:56 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> Message-ID: So I have some good news and some bad news. The good news is that I just used the provided OVA and identified the issues that prevent oVirt from processing its OVF configuration: 1. The element in the References section lacks ovf:size attribute and oVirt unfortunately, is not prepared for this. 2. The USB item doesn't include an oVirt-specific attribute (makes sense..) that oVirt require (that doesn't make sense..) called usbPolicy. I'll post fixes for those issues. In the meantime, the OVF can be modified with the following changes: 1. Add ovf:size="3221225472" to the File element (there's no need for more than 3gb, even 2gb should be enough). 2. Remove the following Item: 0 usb USB Controller usb 6 23 The bad news is that the conversion that would finally start with those changes then fails on my host (with virt-v2v v1.36.3) with the following error: supermin: failed to find a suitable kernel (host_cpu=x86_64). I looked for kernels in /boot and modules in /lib/modules. If this is a Xen guest, and you only have Xen domU kernels installed, try installing a fullvirt kernel (only for supermin use, you shouldn't boot the Xen guest with it). 
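The unpack/patch/repack cycle these fixes require can be sketched as follows. This is a self-contained illustration: the stub OVA only stands in for the real VirtualBox export, and the sed one-liner is illustrative; on a real descriptor you would edit the File element (and delete the USB Item) by hand or with an XML tool.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"

# Fabricate a stand-in OVA so the sketch is runnable end to end;
# a real one (e.g. HAAS-hpdio.ova) comes from the VirtualBox export.
printf '<File ovf:href="HAAS-hpdio-disk001.vmdk"/>\n' > HAAS-hpdio.ovf
printf 'disk payload' > HAAS-hpdio-disk001.vmdk
tar cf HAAS-hpdio.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk

# 1. Unpack: an OVA is a plain tar archive.
tar xf HAAS-hpdio.ova

# 2. Patch: add the ovf:size attribute the import code expects.
sed -i 's|<File |<File ovf:size="3221225472" |' HAAS-hpdio.ovf

# 3. Repack, keeping the .ovf descriptor as the first archive member.
tar cf HAAS-hpdio-fixed.ova HAAS-hpdio.ovf HAAS-hpdio-disk001.vmdk

grep -o 'ovf:size="[0-9]*"' HAAS-hpdio.ovf    # prints: ovf:size="3221225472"
```

The same cycle applies to any other OVF descriptor edit discussed in this thread.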
libguestfs: trace: v2v: launch = -1 (error) @Richard, this is an OVA of a VM installed with Debian64 as guest OS that was exported from VirtualBox, is it supported by virt-v2v? On Wed, Feb 21, 2018 at 7:10 PM, Ji?? Sl??ka wrote: > On 02/21/2018 05:35 PM, Arik Hadas wrote: > > > > > > On Wed, Feb 21, 2018 at 6:03 PM, Ji?? Sl??ka > > wrote: > > > > On 02/21/2018 03:43 PM, Ji?? Sl??ka wrote: > > > On 02/20/2018 11:09 PM, Arik Hadas wrote: > > >> > > >> > > >> On Tue, Feb 20, 2018 at 6:37 PM, Ji?? Sl??ka > > > >> >> wrote: > > >> > > >> On 02/20/2018 03:48 PM, Arik Hadas wrote: > > >> > > > >> > > > >> > On Tue, Feb 20, 2018 at 3:49 PM, Ji?? Sl??ka > > > > > > > >> > > > >>> wrote: > > >> > > > >> > Hi Arik, > > >> > > > >> > On 02/20/2018 01:22 PM, Arik Hadas wrote: > > >> > > > > >> > > > > >> > > On Tue, Feb 20, 2018 at 2:03 PM, Ji?? Sl??ka > > > > > > > >> > > >> > > >> > > > > > > > >> > > >>>> wrote: > > >> > > > > >> > > Hi, > > >> > > > > >> > > > > >> > > Hi Ji??, > > >> > > > > >> > > > > >> > > > > >> > > I would like to try import some ova files into > > our oVirt > > >> instance [1] > > >> > > [2] but I facing problems. > > >> > > > > >> > > I have downloaded all ova images into one of hosts > > >> (ovirt01) into > > >> > > direcory /ova > > >> > > > > >> > > ll /ova/ > > >> > > total 6532872 > > >> > > -rw-r--r--. 1 vdsm kvm 1160387072 Feb 16 16:21 > > >> HAAS-hpcowrie.ovf > > >> > > -rw-r--r--. 1 vdsm kvm 1111785984 Feb 16 16:22 > > >> HAAS-hpdio.ova > > >> > > -rw-r--r--. 1 vdsm kvm 846736896 Feb 16 16:22 > > >> HAAS-hpjdwpd.ova > > >> > > -rw-r--r--. 1 vdsm kvm 891043328 Feb 16 16:23 > > >> HAAS-hptelnetd.ova > > >> > > -rw-r--r--. 1 vdsm kvm 908222464 Feb 16 16:23 > > >> HAAS-hpuchotcp.ova > > >> > > -rw-r--r--. 1 vdsm kvm 880643072 Feb 16 16:24 > > >> HAAS-hpuchoudp.ova > > >> > > -rw-r--r--. 
1 vdsm kvm 890833920 Feb 16 16:24 > > >> HAAS-hpuchoweb.ova > > >> > > > > >> > > Then I tried to import them - from host ovirt01 > and > > >> directory /ova but > > >> > > spinner spins infinitly and nothing is happen. > > >> > > > > >> > > > > >> > > And does it work when you provide a path to the > > actual ova > > >> file, i.e., > > >> > > /ova/HAAS-hpdio.ova, rather than to the directory? > > >> > > > >> > this time it ends with "Failed to load VM configuration > > from > > >> OVA file: > > >> > /ova/HAAS-hpdio.ova" error. > > >> > > > >> > > > >> > Note that the logic that is applied on a specified folder > > is "try > > >> > fetching an 'ova folder' out of the destination folder" > > rather than > > >> > "list all the ova files inside the specified folder". It > seems > > >> that you > > >> > expected the former output since there are no disks in that > > >> folder, right? > > >> > > >> yes, It would be more user friendly to list all ova files and > > then > > >> select which one to import (like listing all vms in vmware > > import) > > >> > > >> Maybe description of path field in manager should be "Path to > > ova file" > > >> instead of "Path" :-) > > >> > > >> > > >> Sorry, I obviously meant 'latter' rather than 'former' before.. > > >> Yeah, I agree that would be better, at least until listing the > > OVA files > > >> in the folder is implemented (that was the original plan, btw) - > > could > > >> you please file a bug? > > > > > > yes, sure > > > > > > > > >> > > I cannot see anything relevant in vdsm log of > > host ovirt01. 
> > >> > > > > >> > > In the engine.log of our standalone ovirt manager > > is just this > > >> > > relevant line > > >> > > > > >> > > 2018-02-20 12:35:04,289+01 INFO > > >> > > > > [org.ovirt.engine.core.common.utils.ansible.AnsibleExecutor] > (default > > >> > > task-31) [458990a7-b054-491a-904e-5c4fe44892c4] > > Executing Ansible > > >> > > command: ANSIBLE_STDOUT_CALLBACK=ovaqueryplugin > > >> > > [/usr/bin/ansible-playbook, > > >> > > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa, > > >> > > > > --inventory=/tmp/ansible-inventory8237874608161160784, > > >> > > --extra-vars=ovirt_query_ova_path=/ova, > > >> > > > > /usr/share/ovirt-engine/playbooks/ovirt-ova-query.yml] [Logfile: > > >> > > > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > >> > > > > >> > > > > > > >> > >> > > >> > > > > > >> > > > > >> > > > > >> > >>>.slu.cz.log] > > >> > > > > >> > > also there are two ansible processes which are > > still running > > >> > (and makes > > >> > > heavy load on system (load 9+ and growing, it > > looks like it > > >> > eats all the > > >> > > memory and system starts swapping)) > > >> > > > > >> > > ovirt 32087 3.3 0.0 332252 5980 ? Sl > > >> 12:35 0:41 > > >> > > /usr/bin/python2 /usr/bin/ansible-playbook > > >> > > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > >> > > --inventory=/tmp/ansible- > inventory8237874608161160784 > > >> > > --extra-vars=ovirt_query_ova_path=/ova > > >> > > /usr/share/ovirt-engine/ > playbooks/ovirt-ova-query.yml > > >> > > ovirt 32099 57.5 78.9 15972880 11215312 ? 
R > > > >> 12:35 11:52 > > >> > > /usr/bin/python2 /usr/bin/ansible-playbook > > >> > > > > --private-key=/etc/pki/ovirt-engine/keys/engine_id_rsa > > >> > > --inventory=/tmp/ansible- > inventory8237874608161160784 > > >> > > --extra-vars=ovirt_query_ova_path=/ova > > >> > > /usr/share/ovirt-engine/ > playbooks/ovirt-ova-query.yml > > >> > > > > >> > > playbook looks like > > >> > > > > >> > > - hosts: all > > >> > > remote_user: root > > >> > > gather_facts: no > > >> > > > > >> > > roles: > > >> > > - ovirt-ova-query > > >> > > > > >> > > and it looks like it only runs query_ova.py but > > on all > > >> hosts? > > >> > > > > >> > > > > >> > > No, the engine provides ansible the host to run on > > when it > > >> > executes the > > >> > > playbook. > > >> > > It would only be executed on the selected host. > > >> > > > > >> > > > > >> > > > > >> > > How does this work? ...or should it work? > > >> > > > > >> > > > > >> > > It should, especially that part of querying the OVA > > and is > > >> supposed to > > >> > > be really quick. > > >> > > Can you please share the engine log and > > >> > > > > >> > > > >> > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > >> > > > > >> > > > > > > >> > >> > > >> > > > > > >> > > > > >> > > > > >> > >>>.slu.cz.log ? > > >> > > > >> > engine log is here: > > >> > > > >> > https://pastebin.com/nWWM3UUq > > >> > > > >> > > > >> > Thanks. > > >> > Alright, so now the configuration is fetched but its > > processing fails. > > >> > We fixed many issues in this area recently, but it appears > that > > >> > something is wrong with the actual size of the disk within > > the ovf file > > >> > that resides inside this ova file. > > >> > Can you please share that ovf file that resides > > inside /ova/HAAS-hpdio.ova? 
> > >> > > >> file HAAS-hpdio.ova > > >> HAAS-hpdio.ova: POSIX tar archive (GNU) > > >> > > >> [root at ovirt01 backup]# tar xvf HAAS-hpdio.ova > > >> HAAS-hpdio.ovf > > >> HAAS-hpdio-disk001.vmdk > > >> > > >> file HAAS-hpdio.ovf is here: > > >> > > >> https://pastebin.com/80qAU0wB > > >> > > >> > > >> Thanks again. > > >> So that seems to be a VM that was exported from Virtual Box, > right? > > >> They don't do anything that violates the OVF specification but > > they do > > >> some non-common things that we don't anticipate: > > > > > > yes, it is most likely ova from VirtualBox > > > > > >> First, they don't specify the actual size of the disk and the > current > > >> code in oVirt relies on that property. > > >> There is a workaround for this though: you can extract an OVA > > file, edit > > >> its OVF configuration - adding ovf:populatedSize="X" (and change > > >> ovf:capacity as I'll describe next) to the Disk element inside the > > >> DiskSection and pack the OVA again (tar cvf > > > >> X is either: > > >> 1. the actual size of the vmdk file + some buffer (iirc, we used > > to take > > >> 15% of extra space for the conversion) > > >> 2. if you're using a file storage or you don't mind consuming more > > >> storage space on your block storage, simply set X to the virtual > > size of > > >> the disk (in bytes) as indicated by the ovf:capacity filed, e.g., > > >> ovf:populatedSize="21474836480" in the case of HAAS-hpdio.ova. > > >> > > >> Second, the virtual size (indicated by ovf:capacity) is specified > in > > >> bytes. The specification says that the default unit of allocation > > shall > > >> be bytes, but practically every OVA file that I've ever saw > > specified it > > >> in GB and the current code in oVirt kind of assumes that this is > the > > >> case without checking the ovf:capacityAllocationUnits attribute > that > > >> could indicate the real unit of allocation [1]. 
> > >> Anyway, long story short, the virtual size of the disk should > > currently > > >> be specified in GB, e.g., ovf:populatedSize="20" in the case of > > >> HAAS-hpdio.ova. > > > > > > wow, thanks for this excellent explanation. I have changed this in > > ovf file > > > > > > ... > > > > ovf:populatedSize="20" ... > > > ... > > > > > > then I was able to import this mofified ova file > (HAAS-hpdio_new.ova). > > > Interesting thing is that the vm was shown in vm list for while > (with > > > state down with lock and status was initializing). After while > this vm > > > disapeared :-o > > > > > > I am going to test it again and collect some logs... > > > > there are interesting logs in /var/log/vdsm/import/ at the host used > for > > import > > > > http://mirror.slu.cz/tmp/ovirt-import.tar.bz2 > > > > > > first of them describes situation where I chose thick provisioning, > > second situation with thin provisioning > > > > interesting part is I believe > > > > libguestfs: command: run: qemu-img > > libguestfs: command: run: \ create > > libguestfs: command: run: \ -f qcow2 > > libguestfs: command: run: \ -o preallocation=off,compat=0.10 > > libguestfs: command: run: \ > > /rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd-a570- > f37fa986a772/images/d44e1890-3e42-420b-939c-dac1290e19af/ > 9edcccbc-b244-4b94-acd3-3c8ee12bbbec > > libguestfs: command: run: \ 21474836480 > > Formatting > > '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd- > a570-f37fa986a772/images/d44e1890-3e42-420b-939c- > dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec', > > fmt=qcow2 size=21474836480 compat=0.10 encryption=off > cluster_size=65536 > > preallocation=off lazy_refcounts=off refcount_bits=16 > > libguestfs: trace: vdsm_disk_create: disk_create = 0 > > qemu-img 'convert' '-p' '-n' '-f' 'qcow2' '-O' 'qcow2' > > '/var/tmp/v2vovl2dccbd.qcow2' > > '/rhev/data-center/mnt/blockSD/088e7ed9-84c7-4fbd- > a570-f37fa986a772/images/d44e1890-3e42-420b-939c- > 
dac1290e19af/9edcccbc-b244-4b94-acd3-3c8ee12bbbec' > > qemu-img: error while writing sector 1000960: No space left on device > > > > virt-v2v: error: qemu-img command failed, see earlier errors > > > > > > > > Sorry again, I made a mistake in: > > "Anyway, long story short, the virtual size of the disk should currently > > be specified in GB, e.g., ovf:populatedSize="20" in the case of > > HAAS-hpdio.ova." > > I should have write ovf:capacity="20". > > So if you wish the actual size of the disk to be 20GB (which means the > > disk is preallocated), the disk element should be set with: > > > ovf:populatedSize="21474836480" ... > > > now I have this inf ovf file > > ovf:populatedSize="21474836480"... > > but while import it fails again, but in this case faster. It looks like > SPM cannot create disk image > > log from SPM host... > > 2018-02-21 18:02:03,599+0100 INFO (jsonrpc/1) [vdsm.api] START > createVolume(sdUUID=u'69f6b3e7-d754-44cf-a665-9d7128260401', > spUUID=u'00000002-0002-0002-0002-0000000002b9', > imgUUID=u'0a5c4ecb-2c04-4f96-858a-4f74915d5caa', size=u'20', > volFormat=4, preallocate=2, diskType=u'DATA', > volUUID=u'bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0', > desc=u'{"DiskAlias":"HAAS-hpdio-disk001.vmdk","DiskDescription":""}', > srcImgUUID=u'00000000-0000-0000-0000-000000000000', > srcVolUUID=u'00000000-0000-0000-0000-000000000000', > initialSize=u'21474836480') from=::ffff:193.84.206.172,53154, > flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab, > task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:46) > 2018-02-21 18:02:03,603+0100 INFO (jsonrpc/1) [IOProcessClient] > Starting client ioprocess-3931 (__init__:330) > 2018-02-21 18:02:03,638+0100 INFO (ioprocess/56120) [IOProcess] > Starting ioprocess (__init__:452) > 2018-02-21 18:02:03,661+0100 INFO (jsonrpc/1) [vdsm.api] FINISH > createVolume return=None from=::ffff:193.84.206.172,53154, > flow_id=e27cd35a-dc4e-4e72-a3ef-aa5b67c2bdab, > task_id=e7598aa1-420a-4612-9ee8-03012b1277d9 (api:52) > 2018-02-21 
18:02:03,692+0100 INFO (jsonrpc/1) [jsonrpc.JsonRpcServer] > RPC call Volume.create succeeded in 0.09 seconds (__init__:573) > 2018-02-21 18:02:03,694+0100 INFO (tasks/1) > [storage.ThreadPool.WorkerThread] START task > e7598aa1-420a-4612-9ee8-03012b1277d9 (cmd= >, args=None) > (threadPool:208) > 2018-02-21 18:02:03,995+0100 INFO (tasks/1) [storage.StorageDomain] > Create placeholder > /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665- > 9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa > for image's volumes (sd:1244) > 2018-02-21 18:02:04,016+0100 INFO (tasks/1) [storage.Volume] Creating > volume bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 (volume:1151) > 2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] The > requested initial 21474836480 is bigger than the max size 134217728 > (blockVolume:345) > 2018-02-21 18:02:04,060+0100 ERROR (tasks/1) [storage.Volume] Failed to > create volume > /rhev/data-center/mnt/blockSD/69f6b3e7-d754-44cf-a665- > 9d7128260401/images/0a5c4ecb-2c04-4f96-858a-4f74915d5caa/ > bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0: > Invalid parameter: 'initial size=41943040' (volume:1175) > 2018-02-21 18:02:04,061+0100 ERROR (tasks/1) [storage.Volume] Unexpected > error (volume:1215) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 1172, in create > initialSize=initialSize) > File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", > line 501, in _create > size, initialSize) > File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", > line 545, in calculate_volume_alloc_size > preallocate, capacity, initial_size) > File "/usr/lib/python2.7/site-packages/vdsm/storage/blockVolume.py", > line 347, in calculate_volume_alloc_size > initial_size) > InvalidParameterException: Invalid parameter: 'initial size=41943040' > 2018-02-21 18:02:04,062+0100 ERROR (tasks/1) [storage.TaskManager.Task] > (Task='e7598aa1-420a-4612-9ee8-03012b1277d9') Unexpected error 
(task:875) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line > 882, in _run > return fn(*args, **kargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line > 336, in run > return self.cmd(*self.argslist, **self.argsdict) > File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py", > line 79, in wrapper > return method(self, *args, **kwargs) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line 1936, > in createVolume > initialSize=initialSize) > File "/usr/lib/python2.7/site-packages/vdsm/storage/sd.py", line 801, > in createVolume > initialSize=initialSize) > File "/usr/lib/python2.7/site-packages/vdsm/storage/volume.py", line > 1217, in create > (volUUID, e)) > VolumeCreationError: Error creating a new volume: (u"Volume creation > bd3ae91a-3b37-4610-9ad3-6c5fdc6cc9b0 failed: Invalid parameter: 'initial > size=41943040'",) > > there are no new logs in import folder on host used for import... > > > > > > > > > > > > > > >> That should do it. If not, please share the OVA file and I will > > examine > > >> it in my environment. > > > > > > original file is at > > > > > > https://haas.cesnet.cz/downloads/release-01/HAAS-hpdio.ova > > > > > > > >> > > >> > > [1] https://github.com/oVirt/ovirt-engine/blob/master/ > backend/manager/modules/utils/src/main/java/org/ovirt/ > engine/core/utils/ovf/OvfOvaReader.java#L220 > > backend/manager/modules/utils/src/main/java/org/ovirt/ > engine/core/utils/ovf/OvfOvaReader.java#L220> > > >> > > >> > > >> > > >> > file > > >> > > > >> > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220123504-ovirt01.net > > > > >> > > > > >> > > > > > > >> > >> > > >> > in the fact does not exists (nor folder > > /var/log/ovirt-engine/ova/) > > >> > > > >> > > > >> > This issue is also resolved in 4.2.2. 
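A quick unit check makes the "initial size=41943040" in the volume-creation error above plausible: assuming vdsm re-expresses the engine's initialSize of 21474836480 bytes in 512-byte blocks (an inference from the quoted blockVolume.py frames, not verified against the vdsm source), the numbers line up exactly:

```shell
initial_bytes=21474836480                      # initialSize from the createVolume log line
echo $((initial_bytes / 512))                  # prints 41943040, the "initial size" in the error
echo $((initial_bytes / 1024 / 1024 / 1024))   # prints 20, i.e. the 20 GiB virtual disk
```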
> > >> > In the meantime, please create the > > /var/log/ovirt-engine/ova/ folder > > >> > manually and make sure its permissions match the ones of > > the other > > >> > folders in /var/log/ovirt-engine. > > >> > > >> ok, done. After another try there is this log file > > >> > > >> > > /var/log/ovirt-engine/ova/ovirt-query-ova-ansible- > 20180220173005-ovirt01.net > > > > >> > >.slu.cz.log > > >> > > >> https://pastebin.com/M5J44qur > > >> > > >> > > >> Is it the log of the execution of the ansible playbook that was > > provided > > >> with a path to the /ova folder? > > >> I'm interested in that in order to see how comes that its > execution > > >> never completed. > > > > > > well, I dont think so, it is log from import with full path to ova > > file > > > > > > > > > > > >> > > >> > > >> > > >> > > >> > Cheers, > > >> > > > >> > Jiri Slezka > > >> > > > >> > > > > >> > > > > >> > > > > >> > > I am using latest 4.2.1.7-1.el7.centos version > > >> > > > > >> > > Cheers, > > >> > > Jiri Slezka > > >> > > > > >> > > > > >> > > [1] https://haas.cesnet.cz/#!index.md > > > > >> > > > > >> > > > index.md>>> > > >> > > > > > index.md>> > > >> > > > > >> > >>> - Cesnet HAAS > > >> > > [2] https://haas.cesnet.cz/downloads/release-01/ > > > > >> > > > > >> > > > > >> > >> > > >> > > > > > >> > > > > >> > > > > >> > >>> - Image repository > > >> > > > > >> > > > > >> > > _______________________________________________ > > >> > > Users mailing list > > >> > > Users at ovirt.org > > > > > > > >> >> > > >> > > > > > > >> > > >>> > > >> > > http://lists.ovirt.org/mailman/listinfo/users > > > > >> > > > > >> > > > > >> > >> > > >> > > > > > >> > > > > >> > > > > >> > >>> > > >> > > > > >> > > > > >> > > > >> > > > >> > > > >> > _______________________________________________ > > >> > Users mailing list > > >> > Users at ovirt.org > > > > > >> > > >> > > >> > http://lists.ovirt.org/mailman/listinfo/users > > > > >> > > > > >> > > > > >> > >> > > >> > > > >> > > > >> > > >> > > >> 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From sakhi at sanren.ac.za Thu Feb 22 10:15:54 2018 From: sakhi at sanren.ac.za (Sakhi Hadebe) Date: Thu, 22 Feb 2018 12:15:54 +0200 Subject: [ovirt-users] Ovirt Cluster Setup In-Reply-To: References: Message-ID: Hi, I restarted everything from scratch and followed this article http://blogs-ramesh.blogspot.co.za/2016/01/ovirt-and-gluster-hyperconvergence.html . Thanks for your quick response Kasturi and Sahina On Wed, Feb 21, 2018 at 8:54 AM, Kasturi Narra wrote: > Hello sakhi, > > Can you please let us know what is the script it is failing > at ? > > Thanks > kasturi > > On Tue, Feb 20, 2018 at 1:05 PM, Sakhi Hadebe wrote: > >> I have 3 Dell R515 servers all installed with centOS 7, and trying to >> setup an oVirt Cluster.
>>
>> Disks configurations:
>> 2 x 1TB - Raid1 - OS Deployment
>> 6 x 1TB - Raid 6 - Storage
>>
>> Memory is 128GB
>>
>> I am following this documentation
>> https://www.ovirt.org/blog/2017/04/up-and-running-with-ovirt-4.1-and-gluster-storage/
>> and I am getting the issue below:
>>
>> PLAY [gluster_servers] *********************************************************
>>
>> TASK [Run a shell script] ******************************************************
>> fatal: [ovirt2.sanren.ac.za]: FAILED! => {"msg": "The conditional check
>> 'result.rc != 0' failed. The error was: error while evaluating conditional
>> (result.rc != 0): 'dict object' has no attribute 'rc'"}
>> fatal: [ovirt3.sanren.ac.za]: FAILED! => {"msg": "The conditional check
>> 'result.rc != 0' failed. The error was: error while evaluating conditional
>> (result.rc != 0): 'dict object' has no attribute 'rc'"}
>> fatal: [ovirt1.sanren.ac.za]: FAILED! => {"msg": "The conditional check
>> 'result.rc != 0' failed. The error was: error while evaluating conditional
>> (result.rc != 0): 'dict object' has no attribute 'rc'"}
>> to retry, use: --limit @/tmp/tmpxFXyGG/run-script.retry
>>
>> PLAY RECAP *********************************************************************
>> ovirt1.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1
>> ovirt2.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1
>> ovirt3.sanren.ac.za : ok=0 changed=0 unreachable=0 failed=1
>>
>> Error: Ansible(>= 2.2) is not installed.
>> Some of the features might not work if not installed.
>>
>> I have installed ansible 2.4 in all the servers, but the error persists.
>>
>> Is there anything I can do to get rid of this error?
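For what it's worth, the failure quoted in that PLAY output can be reproduced outside Ansible. A minimal Python sketch — illustrative only, the function and dictionaries below are assumptions and not the playbook's actual code:

```python
# Illustrative sketch of why Ansible reports "'dict object' has no
# attribute 'rc'": the conditional "result.rc != 0" is evaluated against
# a registered result that has no 'rc' key, because the shell script was
# never actually executed as a command.
def evaluate_conditional(result):
    if "rc" not in result:
        # Ansible surfaces this as: 'dict object' has no attribute 'rc'
        raise AttributeError("'dict object' has no attribute 'rc'")
    return result["rc"] != 0

ran_and_failed = {"rc": 1, "stderr": "boom"}    # task executed a command
never_ran = {"msg": "script was not executed"}  # no 'rc' key at all

assert evaluate_conditional(ran_and_failed) is True

try:
    evaluate_conditional(never_ran)
except AttributeError as err:
    print(err)  # 'dict object' has no attribute 'rc'

# A defensive playbook would guard the check, e.g.:
#   when: result.rc is defined and result.rc != 0
```

Guarding the conditional with an `is defined` test (as in the final comment) is one common way playbooks avoid this error; here it would at least expose the underlying reason the script produced no return code.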
>> --
>> Regards,
>> Sakhi Hadebe
>>
>> Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR
>>
>> Tel: +27 12 841 2308
>> Fax: +27 12 841 4223
>> Cell: +27 71 331 9622
>> Email: sakhi at sanren.ac.za

--
Regards,
Sakhi Hadebe

Engineer: South African National Research Network (SANReN) Competency Area, Meraka, CSIR

Tel: +27 12 841 2308
Fax: +27 12 841 4223
Cell: +27 71 331 9622
Email: sakhi at sanren.ac.za

From rjones at redhat.com Thu Feb 22 10:22:50 2018
From: rjones at redhat.com (Richard W.M. Jones)
Date: Thu, 22 Feb 2018 10:22:50 +0000
Subject: [ovirt-users] problem importing ova vm
In-Reply-To:
References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz>
 <007db758-79fe-074a-fb93-edf087b968a9@slu.cz>
 <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz>
Message-ID: <20180222102250.GG2787@redhat.com>

On Thu, Feb 22, 2018 at 12:09:56PM +0200, Arik Hadas wrote:
> supermin: failed to find a suitable kernel (host_cpu=x86_64).

Please run 'libguestfs-test-tool' and attach the complete output.

> @Richard, this is an OVA of a VM installed with Debian64 as guest OS that
> was exported from VirtualBox, is it supported by virt-v2v?

No, we only support OVAs exported from VMware. OVF isn't a real standard,
it's a ploy by VMware to pretend that they conform to standards.

Rich.
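Since an OVA in the wild is just a tar archive wrapped around an OVF descriptor, one can at least check what produced a given archive before handing it to virt-v2v. A hedged Python sketch — the marker strings are assumptions about typical VirtualBox and VMware descriptors, not a documented interface:

```python
import tarfile

def read_ovf_descriptor(ova_path):
    """Return the text of the first .ovf member of an OVA (a plain tar)."""
    with tarfile.open(ova_path) as tar:
        for member in tar.getmembers():
            if member.name.lower().endswith(".ovf"):
                return tar.extractfile(member).read().decode("utf-8", "replace")
    raise ValueError("no .ovf descriptor found in %s" % ova_path)

def guess_exporter(ovf_text):
    """Very rough heuristic: look for exporter fingerprints in the OVF."""
    lowered = ovf_text.lower()
    if "virtualbox" in lowered or "vbox" in lowered:
        return "virtualbox"   # outside virt-v2v's supported input
    if "vmware" in lowered or "vmx-" in lowered:
        return "vmware"
    return "unknown"
```

With an OVA exported from VirtualBox this would likely return "virtualbox", flagging the import as unsupported before any time is spent on conversion.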
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html From jiri.slezka at slu.cz Thu Feb 22 12:27:18 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Thu, 22 Feb 2018 13:27:18 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: <20180222102250.GG2787@redhat.com> References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> <20180222102250.GG2787@redhat.com> Message-ID: On 02/22/2018 11:22 AM, Richard W.M. Jones wrote: > On Thu, Feb 22, 2018 at 12:09:56PM +0200, Arik Hadas wrote: >> supermin: failed to find a suitable kernel (host_cpu=x86_64). > > Please run ?libguestfs-test-tool? and attach the complete output. libguestfs-test-tool ************************************************************ * IMPORTANT NOTICE * * When reporting bugs, include the COMPLETE, UNEDITED * output below in your bug report. * ************************************************************ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin XDG_RUNTIME_DIR=/run/user/0 SELinux: Enforcing guestfs_get_append: (null) guestfs_get_autosync: 1 guestfs_get_backend: libvirt guestfs_get_backend_settings: [] guestfs_get_cachedir: /var/tmp guestfs_get_direct: 0 guestfs_get_hv: /usr/libexec/qemu-kvm guestfs_get_memsize: 500 guestfs_get_network: 0 guestfs_get_path: /usr/lib64/guestfs guestfs_get_pgroup: 0 guestfs_get_program: libguestfs-test-tool guestfs_get_recovery_proc: 1 guestfs_get_smp: 1 guestfs_get_sockdir: /tmp guestfs_get_tmpdir: /tmp guestfs_get_trace: 0 guestfs_get_verbose: 1 host_cpu: x86_64 Launching appliance, timeout set to 600 seconds. 
libguestfs: launch: program=libguestfs-test-tool libguestfs: launch: version=1.36.3rhel=7,release=6.el7_4.3,libvirt libguestfs: launch: backend registered: unix libguestfs: launch: backend registered: uml libguestfs: launch: backend registered: libvirt libguestfs: launch: backend registered: direct libguestfs: launch: backend=libvirt libguestfs: launch: tmpdir=/tmp/libguestfsmsimNR libguestfs: launch: umask=0022 libguestfs: launch: euid=0 libguestfs: libvirt version = 3002000 (3.2.0) libguestfs: guest random name = guestfs-ii13o2gd48kt6mrz libguestfs: connect to libvirt libguestfs: opening libvirt handle: URI = qemu:///system, auth = default+wrapper, flags = 0 libvirt needs authentication to connect to libvirt URI qemu:///system (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html) (not sure if you need information after authentication (and I am not sure which credentials it needs)) > >> @Richard, this is an OVA of a VM installed with Debian64 as guest OS that >> was exported from VirtualBox, is it supported by virt-v2v? > > No, we only support OVAs exported from VMware. OVF isn't a > real standard, it's a ploy by VMware to pretend that they > conform to standards. :-) maybe supporting import from VirtualBox is the way to lowering VmWare importance :-) Cheers, Jiri -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From rjones at redhat.com Thu Feb 22 12:58:21 2018 From: rjones at redhat.com (Richard W.M. Jones) Date: Thu, 22 Feb 2018 12:58:21 +0000 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> <20180222102250.GG2787@redhat.com> Message-ID: <20180222125821.GH2787@redhat.com> On Thu, Feb 22, 2018 at 01:27:18PM +0100, Ji?? 
Sléžka wrote:
> libvirt needs authentication to connect to libvirt URI qemu:///system
> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html)

You can set the backend to direct to avoid needing libvirt:

export LIBGUESTFS_BACKEND=direct

Alternately you can fiddle with the libvirt polkit configuration
to permit access:

https://libvirt.org/aclpolkit.html

Rich.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v

From msteele at telvue.com Thu Feb 22 13:14:45 2018
From: msteele at telvue.com (Mark Steele)
Date: Thu, 22 Feb 2018 08:14:45 -0500
Subject: [ovirt-users] Unable to add Hosts to Cluster
In-Reply-To:
References:
Message-ID:

We have found a resolution to this issue which is a bit convoluted - and
still does not explain why this started in the first place.

Once we have prepped a HV server to be added (all the NICs are ready;
selinux, networkmanager, firewalld, etc. have been disabled), we have to
do the following:

yum install librbd1

rm /etc/yum.repos.d/CentOS-Base.repo (yes... delete the base repo...)

vi /etc/yum.repos.d/CentOS-Vault.repo and add the following:

[vault]
name=CentOS-$releasever - Extras
#mirrorlist=http://vault.centos.org/?release=$releasever&arch=$basearch&repo=extras
baseurl=http://vault.centos.org/centos/7.0.1406/os/x86_64/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

vi /etc/yum.repos.d/ovirt-3.5-dependencies.repo and edit the baseurls as
shown below:

[ovirt-3.5-glusterfs-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=https://download.gluster.org/pub/gluster/glusterfs/old-releases/3.6/3.6.1/RHEL/epel-7/x86_64
enabled=1
skip_if_unavailable=1
gpgcheck=0

[ovirt-3.5-glusterfs-noarch-epel]
name=GlusterFS is a clustered file-system capable of scaling to several petabytes.
baseurl=http://download.gluster.org/pub/gluster/glusterfs/old-releases/3.6/3.6.1/RHEL/epel-7/noarch
#baseurl=http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/epel-$releasever/noarch
enabled=1
skip_if_unavailable=1
gpgcheck=0

yum remove ovirt-release35
yum remove vdsm
yum remove libvirt
yum install ovirt-release35-002-1
yum install libvirt-1.1.1-29.el7
yum install vdsm-4.16.7-1.gitdb83943.el7

We can then successfully add the HV into the cluster. This is using CentOS
7.0.1406 (Core) as the host OS and oVirt 3.5.0.1-1.el6

If anyone has any questions please feel free to ask.

***
Mark Steele
CIO / VP Technical Operations | TelVue Corporation
TelVue - We Share Your Vision
16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054
800.885.8886 x128 | msteele at telvue.com | http://www.telvue.com
twitter: http://twitter.com/telvue | facebook: https://www.facebook.com/telvue

On Tue, Feb 20, 2018 at 11:24 AM, Yaniv Kaul wrote:

> On Tue, Feb 20, 2018 at 12:52 PM, Mark Steele wrote:
>
>> Is it possible that the HostedEngine became corrupted somehow and that
>> is preventing us from adding hosts?
>
> I doubt that.
> I still suspect the libvirt auth. issue.
> Nevertheless, as commented more than once, you are running on somewhat old
> version with a recent CentOS version. Not sure this combination is tested
> or anyone's running it.
>
>> Is creating a new hosted engine an option?
>
> You could backup and restore to a new HE.
> Y.
>
>> ***
>> Mark Steele
>> CIO / VP Technical Operations | TelVue Corporation
>> TelVue - We Share Your Vision
>> 16000 Horizon Way, Suite 100 | Mt.
Laurel, NJ 08054 >> >> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >> www.telvue.com >> twitter: http://twitter.com/telvue | facebook: https://www.facebook >> .com/telvue >> >> On Mon, Feb 19, 2018 at 9:55 AM, Mark Steele wrote: >> >>> At this point I'm wondering if there is anyone in the community that >>> freelances and would be willing to provide remote support to resolve this >>> issue? >>> >>> We are running with 1/2 our normal hosts, and not being able to add >>> anymore back into the cluster is a serious problem. >>> >>> Best regards, >>> >>> >>> *** >>> *Mark Steele* >>> CIO / VP Technical Operations | TelVue Corporation >>> TelVue - We Share Your Vision >>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>> >>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>> www.telvue.com >>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>> .com/telvue >>> >>> On Sat, Feb 17, 2018 at 12:53 PM, Mark Steele >>> wrote: >>> >>>> Yaniv, >>>> >>>> I have one of my developers assisting me and we are continuing to run >>>> into issues. This is a note from him: >>>> >>>> Hi, I'm trying to add a host to ovirt, but I'm running into package >>>> dependency problems. I have existing hosts that are working and integrated >>>> properly, and inspecting those, I am able to match the packages between the >>>> new host and the existing, but when I then try to add the new host to >>>> ovirt, it fails on reinstall because it's trying to install packages that >>>> are later versions. does the installation run list from ovirt-release35 >>>> 002-1 have unspecified versions? The working hosts use libvirt-1.1.1-29, >>>> and vdsm-4.16.7, but it's trying to install vdsm-4.16.30, which requires a >>>> higher version of libvirt, at which point, the installation fails. is there >>>> some way I can specify which package versions the ovirt install procedure >>>> uses? 
or better yet, skip the package management step entirely? >>>> >>>> >>>> *** >>>> *Mark Steele* >>>> CIO / VP Technical Operations | TelVue Corporation >>>> TelVue - We Share Your Vision >>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>> >>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>> www.telvue.com >>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>> .com/telvue >>>> >>>> On Sat, Feb 17, 2018 at 2:32 AM, Yaniv Kaul wrote: >>>> >>>>> >>>>> >>>>> On Fri, Feb 16, 2018 at 11:14 PM, Mark Steele >>>>> wrote: >>>>> >>>>>> We are using CentOS Linux release 7.0.1406 (Core) and oVirt Engine >>>>>> Version: 3.5.0.1-1.el6 >>>>>> >>>>> >>>>> You are seeing https://bugzilla.redhat.com/show_bug.cgi?id=1444426 , >>>>> which is a result of a default change of libvirt and was fixed in later >>>>> versions of oVirt than the one you are using. >>>>> See patch https://gerrit.ovirt.org/#/c/76934/ for how it was fixed, >>>>> you can probably configure it manually. >>>>> Y. >>>>> >>>>> >>>>>> >>>>>> We have four other hosts that are running this same configuration >>>>>> already. I took one host out of the cluster (forcefully) that was working >>>>>> and now it will not add back in either - throwing the same SASL error. >>>>>> >>>>>> We are looking at downgrading libvirt as I've seen that somewhere >>>>>> else - is there another version of RH I should be trying? I have a host I >>>>>> can put it on. >>>>>> >>>>>> >>>>>> >>>>>> *** >>>>>> *Mark Steele* >>>>>> CIO / VP Technical Operations | TelVue Corporation >>>>>> TelVue - We Share Your Vision >>>>>> 16000 Horizon Way, Suite 100 | Mt. 
Laurel, NJ 08054 >>>>>> >>>>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>>>> www.telvue.com >>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>>>> .com/telvue >>>>>> >>>>>> On Fri, Feb 16, 2018 at 3:31 PM, Yaniv Kaul wrote: >>>>>> >>>>>>> >>>>>>> >>>>>>> On Feb 16, 2018 6:47 PM, "Mark Steele" wrote: >>>>>>> >>>>>>> Hello all, >>>>>>> >>>>>>> We recently had a network event where we lost access to our storage >>>>>>> for a period of time. The Cluster basically shut down all our VM's and in >>>>>>> the process we had three HV's that went offline and would not communicate >>>>>>> properly with the cluster. >>>>>>> >>>>>>> We have since completely reinstalled CentOS on the hosts and >>>>>>> attempted to install them into the cluster with no joy. We've gotten to the >>>>>>> point where we generally get an error message in the web gui: >>>>>>> >>>>>>> >>>>>>> Which EL release and which oVirt release are you using? My guess >>>>>>> would be latest EL, with an older oVirt? >>>>>>> Y. >>>>>>> >>>>>>> >>>>>>> Stage: Misc Configuration >>>>>>> Host hv-ausa-02 installation failed. Command returned failure code 1 >>>>>>> during SSH session 'root at 10.1.90.154'. 
>>>>>>> >>>>>>> the following is what we are seeing in the messages log: >>>>>>> >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>>> authentication failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>>> 15231: error : virNetSASLSessionListMechanisms:390 : internal >>>>>>> error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: >>>>>>> Internal Error -4 in server.c near line 1757) >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>>> 15231: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>>>> failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.761+0000: >>>>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>>>> Input/output error >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>>> authentication failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.962+0000: >>>>>>> 15233: error : virNetSASLSessionListMechanisms:390 : internal >>>>>>> error: cannot list SASL mechanisms -4 (SASL(-4): no mechanism available: >>>>>>> Internal Error -4 in server.c near line 1757) >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>>>> 15233: error : remoteDispatchAuthSaslInit:3411 : authentication >>>>>>> failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 libvirtd: 2018-02-16 16:39:53.963+0000: >>>>>>> 15226: error : virNetSocketReadWire:1808 : End of file while reading data: >>>>>>> Input/output error >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirt: XML-RPC error : >>>>>>> authentication failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: Traceback (most recent call >>>>>>> last): >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File "/usr/bin/vdsm-tool", >>>>>>> line 219, in main >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return 
>>>>>>> tool_command[cmd]["command"](*args) >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib/python2.7/site-packages/vdsm/tool/upgrade_300_networks.py", >>>>>>> line 83, in upgrade_networks >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: networks = netinfo.networks() >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib/python2.7/site-packages/vdsm/netinfo.py", line 112, in >>>>>>> networks >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = libvirtconnection.get() >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>>>> 159, in get >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: conn = _open_qemu_connection() >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py", line >>>>>>> 95, in _open_qemu_connection >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return >>>>>>> utils.retry(libvirtOpen, timeout=10, sleep=0.2) >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1108, in >>>>>>> retry >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: return func() >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: File >>>>>>> "/usr/lib64/python2.7/site-packages/libvirt.py", line 105, in >>>>>>> openAuth >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: if ret is None:raise >>>>>>> libvirtError('virConnectOpenAuth() failed') >>>>>>> Feb 16 11:39:53 hv-ausa-02 vdsm-tool: libvirtError: authentication >>>>>>> failed: authentication failed >>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service: control >>>>>>> process exited, code=exited status=1 >>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Failed to start Virtual Desktop >>>>>>> Server Manager network restoration. >>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Dependency failed for Virtual >>>>>>> Desktop Server Manager. 
>>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Job vdsmd.service/start failed >>>>>>> with result 'dependency'. >>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: Unit vdsm-network.service >>>>>>> entered failed state. >>>>>>> Feb 16 11:39:53 hv-ausa-02 systemd: vdsm-network.service failed. >>>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 10 of user root. >>>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 10 of user root. >>>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Started Session 11 of user root. >>>>>>> Feb 16 11:40:01 hv-ausa-02 systemd: Starting Session 11 of user root. >>>>>>> >>>>>>> Can someone point me in the right direction to resolve this - it >>>>>>> seems to be a SASL issue perhaps? >>>>>>> >>>>>>> *** >>>>>>> *Mark Steele* >>>>>>> CIO / VP Technical Operations | TelVue Corporation >>>>>>> TelVue - We Share Your Vision >>>>>>> 16000 Horizon Way, Suite 100 | Mt. Laurel, NJ 08054 >>>>>>> >>>>>>> 800.885.8886 x128 <(800)%20885-8886> | msteele at telvue.com | http:// >>>>>>> www.telvue.com >>>>>>> twitter: http://twitter.com/telvue | facebook: https://www.facebook >>>>>>> .com/telvue >>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>> >>>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiri.slezka at slu.cz Thu Feb 22 13:26:06 2018 From: jiri.slezka at slu.cz (=?UTF-8?B?SmnFmcOtIFNsw6nFvmth?=) Date: Thu, 22 Feb 2018 14:26:06 +0100 Subject: [ovirt-users] problem importing ova vm In-Reply-To: <20180222125821.GH2787@redhat.com> References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> <20180222102250.GG2787@redhat.com> <20180222125821.GH2787@redhat.com> Message-ID: On 02/22/2018 01:58 PM, Richard W.M. 
Jones wrote: > On Thu, Feb 22, 2018 at 01:27:18PM +0100, Ji?? Sl??ka wrote: >> libvirt needs authentication to connect to libvirt URI qemu:///system >> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html) > > You can set the backend to direct to avoid needing libvirt: > > export LIBGUESTFS_BACKEND=direct > > Alternately you can fiddle with the libvirt polkit configuration > to permit access: thanks, here is full output http://mirror.slu.cz/tmp/libguestfs-test-tool.txt Jiri > > https://libvirt.org/aclpolkit.html > > Rich. > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3716 bytes Desc: S/MIME Cryptographic Signature URL: From ahadas at redhat.com Thu Feb 22 13:30:58 2018 From: ahadas at redhat.com (Arik Hadas) Date: Thu, 22 Feb 2018 15:30:58 +0200 Subject: [ovirt-users] problem importing ova vm In-Reply-To: References: <59b63d4f-947d-2730-3dc3-a5cc78b65fcf@slu.cz> <007db758-79fe-074a-fb93-edf087b968a9@slu.cz> <21a346fc-fdf2-e30a-10a6-1f133671b6eb@slu.cz> <20180222102250.GG2787@redhat.com> <20180222125821.GH2787@redhat.com> Message-ID: On Thu, Feb 22, 2018 at 3:26 PM, Ji?? Sl??ka wrote: > On 02/22/2018 01:58 PM, Richard W.M. Jones wrote: > > On Thu, Feb 22, 2018 at 01:27:18PM +0100, Ji?? Sl??ka wrote: > >> libvirt needs authentication to connect to libvirt URI qemu:///system > >> (see also: http://libvirt.org/auth.html http://libvirt.org/uri.html) > > > > You can set the backend to direct to avoid needing libvirt: > > > > export LIBGUESTFS_BACKEND=direct > > > > Alternately you can fiddle with the libvirt polkit configuration > > to permit access: > > thanks, here is full output > > http://mirror.slu.cz/tmp/libguestfs-test-tool.txt > > Jiri > Thanks, there is apparently something wrong with that particular host of mine - not worth spending the time on investigating it. 
Jiri, your test seems to pass, could you try invoking the import again with
the latest proposed changes to the OVF configuration (adding ovf:size to the
File element and removing the USB item) and update us?

> > https://libvirt.org/aclpolkit.html
> >
> > Rich.

From fsoyer at systea.fr Thu Feb 22 14:22:57 2018
From: fsoyer at systea.fr (fsoyer)
Date: Thu, 22 Feb 2018 15:22:57 +0100
Subject: [ovirt-users] VMs with multiple vdisks don't migrate
In-Reply-To:
Message-ID: <4663-5a8ed280-61-50066700@129695947>

Hi,
Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
(192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
while the engine.log in the first mail on 2018-02-12 was for VMs standing
on victor, migrated (or failed to migrate...) to ginger. Symptoms were
exactly the same, in both directions, and VMs work like a charm before,
and even after (migration "killed" by a poweroff of VMs).
Am I the only one experiencing this problem?

Thanks
--
Cordialement,
Frank Soyer

Le Jeudi, Février 22, 2018 00:45 CET, Maor Lipchuk a écrit :

Hi Frank,
Sorry about the delayed response.
I've been going through the logs you attached, although I could not find
any specific indication why the migration failed because of the disk you
were mentioning.
Does this VM run with both disks on the target host without migration?

Regards,
Maor

On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote:

Hi Maor, sorry for the double post, I've changed the email address of my
account and supposed that I'd need to re-post it. And thank you for your
time. Here are the logs. I added a vdisk to an existing VM: it no more
migrates, needing to poweroff it after minutes. Then simply deleting the
second disk makes it migrate in exactly 9s without problem!
https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d

--
Cordialement,
Frank Soyer

Le Mercredi, Février 14, 2018 11:04 CET, Maor Lipchuk a écrit :

Hi Frank,
I already replied on your last email.
Can you provide the VDSM logs from the time of the migration failure for
both hosts: ginger.local.systea.fr and victor.local.systea.fr?

Thanks,
Maor

On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote:

Hi all,
I discovered yesterday a problem when migrating VMs with more than one
vdisk. On our test servers (oVirt 4.1, shared storage with Gluster), I
created 2 VMs needed for a test, from a template with a 20G vdisk. On
these VMs I added a 100G vdisk (for this test I didn't want to waste time
extending the existing vdisks... But I lost time finally...). The VMs
with the 2 vdisks work well.
Now I saw some updates waiting on the host. I tried to put it in
maintenance... But it stopped on the two VMs. They were marked
"migrating", but no more accessible. Other (small) VMs with only 1 vdisk
were migrated without problem at the same time.
I saw that a kvm process for the (big) VMs was launched on the source AND
destination host, but after tens of minutes, the migration and the VMs
were still frozen. I tried to cancel the migration for the VMs: failed.
The only way to stop it was to poweroff the VMs: the kvm process died on
the 2 hosts and the GUI alerted on a failed migration.
In doubt, I tried to delete the second vdisk on one of these VMs: it
migrates then without error! And no access problem. I tried to extend the
first vdisk of the second VM, then delete the second vdisk: it migrates
now without problem!
So after another test with a VM with 2 vdisks, I can say that this blocked the migration process :( In engine.log, for a VMs with 1 vdisk migrating well, we see :2018-02-12 16:46:29,705+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:29,955+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] Running command: MigrateVmToServerCommand internal: false. Entities affected : ?ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:46:30,261+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 14f61ee0 2018-02-12 16:46:30,262+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] 
START, MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost='192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 775cd381 2018-02-12 16:46:30,277+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, log id: 775cd381 2018-02-12 16:46:30,285+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 2018-02-12 16:46:30,301+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, User: admin at internal-authz). 
2018-02-12 16:46:31,106+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] START, FullListVDSCommand(HostName = victor.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 2018-02-12 16:46:31,147+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, 
device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 2018-02-12 16:46:31,150+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.6}, type=graphics, 
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:31,151+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) [54a65b66] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) was unexpectedly detected as 'MigratingTo' on VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') 2018-02-12 16:46:31,152+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) ignoring it in the refresh until migration is done .... 
2018-02-12 16:46:41,631+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) 2018-02-12 16:46:41,632+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, DestroyVDSCommand(HostName = victor.local.systea.fr, DestroyVmVDSCommandParameters:{runAsync='true', hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, DestroyVDSCommand, log id: 560eca57 2018-02-12 16:46:41,650+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingFrom' --> 'Down' 2018-02-12 16:46:41,651+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status 'MigratingTo' 2018-02-12 16:46:42,163+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) moved from 'MigratingTo' --> 'Up' 2018-02-12 16:46:42,169+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, MigrateStatusVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 2018-02-12 16:46:42,174+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, MigrateStatusVDSCommand, log id: 7a25c281 2018-02-12 16:46:42,194+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration completed (VM: Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual downtime: (N/A)) 2018-02-12 16:46:42,201+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (ForkJoinPool-1-worker-4) [] Lock freed to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', sharedLocks=''}' 2018-02-12 16:46:42,203+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, FullListVDSCommand(HostName = ginger.local.systea.fr, FullListVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 2018-02-12 16:46:42,254+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, 
FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, tabletEnable=true, pid=18748, guestDiskMapping={}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@760085fd, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, display=vnc}], log id: 7cc65298 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:42,257+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} 2018-02-12 16:46:46,260+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] FINISH, FullListVDSCommand, return: [{acpiEnable=true, emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, 
tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Object;@77951faf, custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='channel0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4-4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='017b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', address='{bus=0, type=usb, port=1}', managed='false', plugged='true', readOnly='false', deviceAlias='input0', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='VmDeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}, device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4aa6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', 
vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, controller=0, type=virtio-serial, port=2}', managed='false', plugged='true', readOnly='false', deviceAlias='channel1', customProperties='[]', snapshotId='null', logicalName='null', hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, maxMemSlots=16, kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, display=vnc}], log id: 58cdef4c 2018-02-12 16:46:46,267+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a vnc Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, port=5901} 2018-02-12 16:46:46,268+01 INFO ?[org.ovirt.engine.core.vdsbroker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) [7fcb200a] Received a lease Device without an address when processing VM 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6:_DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} ? 
For the VM with 2 vdisks we see : 2018-02-12 16:49:06,112+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', sharedLocks=''}' 2018-02-12 16:49:06,407+01 INFO ?[org.ovirt.engine.core.bll.MigrateVmToServerCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Running command: MigrateVmToServerCommand internal: false. Entities affected : ?ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction group MIGRATE_VM with role type USER 2018-02-12 16:49:06,712+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 3702a9e0 2018-02-12 16:49:06,713+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, MigrateVDSCommandParameters:{runAsync='true', 
hostId='d569c2dd-8f30-4878-8aea-858db285cf69', vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost='192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', migrationDowntime='0', autoConverge='true', migrateCompressed='false', consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', maxIncomingMigrations='2', maxOutgoingMigrations='2', convergenceSchedule='[init=[{name=setDowntime, params=[100]}], stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, action={name=setDowntime, params=[200]}}, {limit=3, action={name=setDowntime, params=[300]}}, {limit=4, action={name=setDowntime, params=[400]}}, {limit=6, action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, params=[]}}]]'}), log id: 1840069c 2018-02-12 16:49:06,724+01 INFO ?[org.ovirt.engine.core.vdsbroker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, log id: 1840069c 2018-02-12 16:49:06,732+01 INFO ?[org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 2018-02-12 16:49:06,753+01 INFO ?[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Migration started (VM: Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: victor.local.systea.fr, User: admin at internal-authz). ... 
2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done ... and so on, with the last lines repeating indefinitely for hours until we powered off the VM... Is this something known? Any idea about it? Thanks. oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1. -- Regards, Frank Soyer _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users
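[Editor's note] The convergenceSchedule printed in the MigrateVDSCommand parameters earlier in this thread drives how the engine reacts each time the migration is judged to be stalling. The sketch below is one plausible reading of those limit/action pairs (the exact limit semantics are an assumption, not vdsm's actual implementation): escalate the allowed downtime step by step, and treat limit=-1 as the catch-all abort.

```python
# Hypothetical interpretation of the convergenceSchedule seen in the logs:
# after `stall_count` stalling rounds, apply the first step whose 'limit'
# still covers that count; the limit=-1 entry is the give-up action.
SCHEDULE = {
    "init": [{"name": "setDowntime", "params": [100]}],
    "stalling": [
        {"limit": 1, "action": {"name": "setDowntime", "params": [150]}},
        {"limit": 2, "action": {"name": "setDowntime", "params": [200]}},
        {"limit": 3, "action": {"name": "setDowntime", "params": [300]}},
        {"limit": 4, "action": {"name": "setDowntime", "params": [400]}},
        {"limit": 6, "action": {"name": "setDowntime", "params": [500]}},
        {"limit": -1, "action": {"name": "abort", "params": []}},
    ],
}

def action_for_stall(schedule, stall_count):
    """Return (name, params) of the step applied after `stall_count` stalls."""
    for step in schedule["stalling"]:
        if step["limit"] != -1 and stall_count <= step["limit"]:
            action = step["action"]
            return action["name"], action["params"]
    # No finite limit covers this many stalls: fall through to limit=-1.
    return "abort", []

print(action_for_stall(SCHEDULE, 2))   # ('setDowntime', [200])
print(action_for_stall(SCHEDULE, 99))  # ('abort', [])
```

Under this reading, a migration that keeps stalling (as the "ignoring it in the refresh until migration is done" lines above repeat) would eventually hit the abort step rather than loop forever, which is why the endless repetition reported here looks suspicious.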
From poltsi at poltsi.fi Thu Feb 22 15:15:27 2018 From: poltsi at poltsi.fi (Paul-Erik Törrönen) Date: Thu, 22 Feb 2018 17:15:27 +0200 Subject: [ovirt-users] Confused by logical networks Message-ID: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> I'm not sure how the logical networks should work and would appreciate it if someone could shed some light on the matter; I've tried reading the documentation [1] but have not become any wiser :-/ For the sake of argument, I have two hosts in the same cluster/DC. They both have 2 network devices each (let's call them eth0 and eth1). On both hosts the ovirtmgmt network is connected to eth0 and uses the 10.0.0.0/8 network. Host 1 is 10.0.0.1 and host 2 is 10.0.0.2. All four network devices are connected to one switch. Then I create a logical network, mylogic, which should be 192.168.1.0/24, and assign it to eth1 on each host, but define an IP address, 192.168.1.1, only for host 1; host 2 also has the network assigned to eth1, but without an IP address. Next I create vm1 on host 1, give it a single virtual network connection to mylogic, and configure the guest to use 192.168.1.2 with gw 192.168.1.1. Obviously I can ping 192.168.1.1 from the guest, since that is the host address on the logical network and the guest is running on the same hardware where the host IP address is defined. However, and this is where my confusion lies, if I now create another vm, vm2, on host 2, attach its network device to the mylogic network and configure it to use 192.168.1.3 with gw 192.168.1.1, I can ping neither 192.168.1.1 nor 192.168.1.2. My understanding is that vm2 should be able to ping vm1 as well as the gateway address defined on host 1. However, this does not seem to be the case. What have I missed here? TIA, Poltsi [1]
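[Editor's note] One detail worth noticing in the setup above: both guests sit in the same /24, so vm2 never routes via the 192.168.1.1 gateway to reach vm1 — it resolves vm1's address directly with ARP, which means the two eth1 ports must share a plain layer-2 path (same VLAN or untagged segment on the switch). A quick sanity check of the addressing, using only the standard library:

```python
import ipaddress

net = ipaddress.ip_network("192.168.1.0/24")
vm1 = ipaddress.ip_address("192.168.1.2")
vm2 = ipaddress.ip_address("192.168.1.3")
gw = ipaddress.ip_address("192.168.1.1")

# All three addresses are on-link for 192.168.1.0/24, so vm2 -> vm1 traffic
# is delivered directly at layer 2 and never touches the gateway address.
print(all(addr in net for addr in (vm1, vm2, gw)))  # True
```

If the switch does not bridge the two eth1 ports into one broadcast domain, that direct layer-2 delivery fails exactly as described, regardless of the gateway configuration.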
https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/ From amureini at redhat.com Thu Feb 22 15:44:05 2018 From: amureini at redhat.com (Allon Mureinik) Date: Thu, 22 Feb 2018 17:44:05 +0200 Subject: [ovirt-users] Let's get oVirt in Stack Overflow's Open Source Advertising again Message-ID: Hi all, With Eldan's help, we have an advert for oVirt [1] suggested for Stack Overflow's traditional Open Source Advertising campaign [2]. As always, we need the upvotes to get it up there. Want to see oVirt on Stack Overflow's sidebar again? Hop over and upvote. -Allon [1] https://meta.stackoverflow.com/a/363689/2422776 [2] https://meta.stackoverflow.com/q/362773/2422776 From shuriku at shurik.kiev.ua Thu Feb 22 15:45:28 2018 From: shuriku at shurik.kiev.ua (Alexandr Krivulya) Date: Thu, 22 Feb 2018 17:45:28 +0200 Subject: [ovirt-users] oVirt 4.2: hostdev passthrough not working any more In-Reply-To: References: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> <20180222093008.GA8924@Alexandra.local> Message-ID: <26581562-7e41-35c3-5efc-5edebd841560@shurik.kiev.ua> 22.02.2018 11:37, Alexandr Krivulya wrote: > 22.02.2018 11:30, Martin Polednik wrote: >> On 22/02/18 10:40 +0200, Alexandr Krivulya wrote: >>> Hello, the same problem after upgrade to 4.2.1 :( >>> >>> >>> 18.01.2018 11:53, Daniel Helgenberger wrote: >>>> Hello, >>>> >>>> yesterday I upgraded to 4.2.0 from 4.1.8. >>>> >>>> Now I notice I cannot assign host device passthrough any more; in the GUI >>>> the 'Pinned to host' list is empty; I cannot select any host for >>>> passthrough host pinning. >> >> Does the host that previously worked report device passthrough >> capability? In the UI it's the "Device Passthrough: Enabled" field.
>> (and the similarly named field in the vdsm-client getCapabilities call) > Now "Device Passthrough: Disabled", but USB passthrough works well on 4.1 After enabling intel_iommu, USB passthrough works again. But why is this option now needed for USB? From mlipchuk at redhat.com Thu Feb 22 15:56:21 2018 From: mlipchuk at redhat.com (Maor Lipchuk) Date: Thu, 22 Feb 2018 17:56:21 +0200 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: <4663-5a8ed280-61-50066700@129695947> References: <4663-5a8ed280-61-50066700@129695947> Message-ID: I encountered a bug (see [1]) which contains the same error mentioned in your VDSM logs (see [2]), but I doubt it is related. Milan, do you have any advice for troubleshooting the issue? Would the libvirt/qemu logs help? I would suggest opening a bug on that issue so we can track it more properly. Regards, Maor [1] https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to VM running on 2 Hosts [2]
2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] Internal server error (__init__:577)
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
    io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
  File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
    'current_values': v.getIoTune()}
  File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
    result = self.getIoTuneResponse()
  File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
    res = self._dom.blockIoTune(
  File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
    % self.vmid)
NotConnectedError: VM
u'755cf168-de65-42ed-b22f-efe9136f7594' was not started yet or was shut down On Thu, Feb 22, 2018 at 4:22 PM, fsoyer wrote: > Hi, > Yes, in the vdsm logs from 2018-02-16 I tried with a VM running on ginger > (192.168.0.6) migrated (or failing to migrate...) to victor (192.168.0.5), > while the engine.log in the first mail on 2018-02-12 was for VMs running > on victor, migrated (or failing to migrate...) to ginger. The symptoms were > exactly the same in both directions, and the VMs worked like a charm before, > and even after (the migration "killed" by powering off the VMs). > Am I the only one experiencing this problem? > > > Thanks > -- > > Regards, > > *Frank Soyer * > > > > On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk > wrote: > > > Hi Frank, > > Sorry for the delayed response. > I've been going through the logs you attached, but I could not find > any specific indication that the migration failed because of the disk you > mentioned. > Does this VM run with both disks on the target host without migration? > > Regards, > Maor > > > On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote: >> >> Hi Maor, >> sorry for the double post, I changed the email address of my account and >> assumed that I'd need to re-post it. >> And thank you for your time. Here are the logs. I added a vdisk to an >> existing VM: it no longer migrates, and I have to power it off after several minutes. >> Then simply deleting the second disk makes it migrate in exactly 9s without >> problem! >> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 >> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d >> >> -- >> >> Regards, >> >> *Frank Soyer * >> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk < >> mlipchuk at redhat.com> wrote: >> >> >> Hi Frank, >> >> I already replied to your last email.
>> Can you provide the VDSM logs from the time of the migration failure for >> both hosts: >> ginger.local.systea.fr and >> victor.local.systea.fr >> >> Thanks, >> Maor >> >> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: >>> >>> Hi all, >>> Yesterday I discovered a problem when migrating VMs with more than one >>> vdisk. >>> On our test servers (oVirt 4.1, shared storage with Gluster), I created 2 >>> VMs needed for a test, from a template with a 20G vdisk. To these VMs I >>> added a 100G vdisk (for these tests I didn't want to waste time extending >>> the existing vdisks... But I lost time in the end...). The VMs with the 2 >>> vdisks work well. >>> Then I saw some updates waiting on the host. I tried to put it in >>> maintenance... But it got stuck on the two VMs. They were marked "migrating" >>> but were no longer accessible. Other (small) VMs with only 1 vdisk were migrated >>> without problem at the same time. >>> I saw that a kvm process for the (big) VMs was launched on the source >>> AND destination host, but after tens of minutes the migration and the VMs >>> were still frozen. I tried to cancel the migration for the VMs: it failed. >>> The only way to stop it was to power off the VMs: the kvm process died on >>> the 2 hosts and the GUI reported a failed migration. >>> As a test, I tried to delete the second vdisk on one of these VMs: it >>> then migrated without error! And no access problem. >>> I tried to extend the first vdisk of the second VM, then delete the >>> second vdisk: it now migrates without problem!
>>> >>> So after another test with a VM with 2 vdisks, I can say that this >>> blocked the migration process :( >>> >>> In engine.log, for a VMs with 1 vdisk migrating well, we see : >>> >>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired >>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>> sharedLocks=''}' >>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>> Running command: MigrateVmToServerCommand internal: false. Entities >>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction >>> group MIGRATE_VM with role type USER >>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>> action={name=setDowntime, params=[200]}}, {limit=3, >>> action={name=setDowntime, params=[300]}}, {limit=4, >>> action={name=setDowntime, params=[400]}}, {limit=6, >>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>> params=[]}}]]'}), log id: 14f61ee0 >>> 2018-02-12 16:46:30,262+01 INFO 
[org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>> [2f712024-5982-46a8-82c8-fd8293da5725] START, >>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, >>> MigrateVDSCommandParameters:{runAsync='true', >>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>> action={name=setDowntime, params=[200]}}, {limit=3, >>> action={name=setDowntime, params=[300]}}, {limit=4, >>> action={name=setDowntime, params=[400]}}, {limit=6, >>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>> params=[]}}]]'}), log id: 775cd381 >>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, >>> log id: 775cd381 >>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 >>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db >>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) >>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: >>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, >>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, 
Custom >>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>> ginger.local.systea.fr, User: admin at internal-authz). >>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>> START, FullListVDSCommand(HostName = victor.local.systea.fr, >>> FullListVDSCommandParameters:{runAsync='true', >>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 >>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, >>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, >>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, >>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel0', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>> 
device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>> readOnly='false', deviceAlias='input0', customProperties='[]', >>> snapshotId='null', logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>> snapshotId='null', logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=2}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel1', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, >>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 >>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >>> [27fac647] Fetched 3 VMs from VDS 
'd569c2dd-8f30-4878-8aea-858db285cf69' >>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>> [54a65b66] Received a vnc Device without an address when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>> port=5901} >>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>> [54a65b66] Received a lease Device without an address when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>> was unexpectedly detected as 'MigratingTo' on VDS >>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) >>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >>> ginger.local.systea.fr) ignoring it in the refresh until migration is >>> done >>> .... 
>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>> victor.local.systea.fr) >>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >>> DestroyVDSCommand(HostName = victor.local.systea.fr, >>> DestroyVmVDSCommandParameters:{runAsync='true', >>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >>> id: 560eca57 >>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >>> DestroyVDSCommand, log id: 560eca57 >>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>> moved from 'MigratingFrom' --> 'Down' >>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status >>> 'MigratingTo' >>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>> moved from 'MigratingTo' --> 'Up' >>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >>> MigrateStatusVDSCommandParameters:{runAsync='true', >>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: >>> null, Custom Event ID: -1, Message: Migration completed (VM: >>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual >>> downtime: (N/A)) >>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>> (ForkJoinPool-1-worker-4) [] Lock freed to object >>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>> sharedLocks=''}' >>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >>> FullListVDSCommand(HostName = ginger.local.systea.fr, >>> FullListVDSCommandParameters:{runAsync='true', >>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >>> 
2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, >>> FullListVDSCommand, return: [{acpiEnable=true, >>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>> tabletEnable=true, pid=18748, guestDiskMapping={}, >>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, >>> guestNumaNodes=[Ljava.lang.Object;@760085fd, >>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel0', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>> readOnly='false', deviceAlias='input0', customProperties='[]', >>> snapshotId='null', logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>> snapshotId='null', 
logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=2}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel1', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, >>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600, >>> display=vnc}], log id: 7cc65298 >>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>> Received a vnc Device without an address when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>> port=5901} >>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>> Received a lease Device without an address when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>
device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >>> ect;@77951faf, custom={device_fbddd528-7d93-4 >>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel0', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>> readOnly='false', deviceAlias='input0', customProperties='[]', >>> snapshotId='null', logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>> 
type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>> snapshotId='null', logicalName='null', hostDevice='null'}, >>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>> controller=0, type=virtio-serial, port=2}', managed='false', >>> plugged='true', readOnly='false', deviceAlias='channel1', >>> customProperties='[]', snapshotId='null', logicalName='null', >>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, >>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620, >>> display=vnc}], log id: 58cdef4c >>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>> [7fcb200a] Received a vnc Device without an address when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>> port=5901} >>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>> [7fcb200a] Received a lease Device without an address
when processing VM >>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>> >>> >>> >>> >>> For the VM with 2 vdisks we see : >>> >>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired >>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', >>> sharedLocks=''}' >>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>> Running command: MigrateVmToServerCommand internal: false. Entities >>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >>> group MIGRATE_VM with role type USER >>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>> action={name=setDowntime, params=[200]}}, {limit=3, >>> 
action={name=setDowntime, params=[300]}}, {limit=4, >>> action={name=setDowntime, params=[400]}}, {limit=6, >>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>> params=[]}}]]'}), log id: 3702a9e0 >>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >>> MigrateVDSCommandParameters:{runAsync='true', >>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>> action={name=setDowntime, params=[200]}}, {limit=3, >>> action={name=setDowntime, params=[300]}}, {limit=4, >>> action={name=setDowntime, params=[400]}}, {limit=6, >>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>> params=[]}}]]'}), log id: 1840069c >>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >>> log id: 1840069c >>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >>> 
broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: >>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, >>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom >>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: >>> victor.local.systea.fr, User: admin at internal-authz). >>> ... >>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro >>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) >>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' >>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>> was unexpectedly detected as 'MigratingTo' on VDS >>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>> victor.local.systea.fr) ignoring it in the refresh until migration is >>> done >>> ... 
>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>> was unexpectedly detected as 'MigratingTo' on VDS >>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>> victor.local.systea.fr) ignoring it in the refresh until migration is >>> done >>> >>> >>> >>> and so on, with the last lines repeated indefinitely for hours until we powered off >>> the VM... >>> Is this something known? Any idea about that? >>> >>> Thanks >>> >>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1. >>> >>> -- >>> >>> Cordialement, >>> >>> *Frank Soyer * >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >> >> >> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From erekle.magradze at recogizer.de Thu Feb 22 16:05:37 2018 From: erekle.magradze at recogizer.de (Erekle Magradze) Date: Thu, 22 Feb 2018 17:05:37 +0100 Subject: [ovirt-users] rebooting hypervisors from time to time Message-ID: Hello there, I am facing the following problem: from time to time one of the hypervisors (there are 3 of them) reboots. I am using ovirt-release42-4.2.1-1.el7.centos.noarch and gluster as a storage backend (glusterfs-3.12.5-2.el7.x86_64). I am suspecting gluster because of, e.g., the message below from one of the volumes. Could you please help and suggest in which direction the investigation should go?
Thanks in advance Cheers Erekle [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013] [2018-02-22 15:41:10.198701] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0 [2018-02-22 15:41:10.198704] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0 [2018-02-22 15:42:11.293608] I [MSGID: 109063] [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). Holes=1 overlaps=0 [2018-02-22 15:53:16.245720] I [MSGID: 100030] [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs --volfile-server=10.0.0.21 --volfi le-server=10.0.0.22 --volfile-server=10.0.0.23 --volfile-id=/virtimages /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages) [2018-02-22 15:53:16.263712] W [MSGID: 101002] [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' is deprecated, preferred is 'transport.address-family', continuing with correction [2018-02-22 15:53:16.269595] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1 [2018-02-22 15:53:16.273483] I [MSGID: 101190] [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread with index 2 [2018-02-22 15:53:16.273594] W [MSGID: 101174] [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead: option 'parallel-readdir' is not recognized [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify] 0-virtimages-client-0: parent translators are ready, attempting connect on transport [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify] 0-virtimages-client-1: parent translators are ready, attempting connect on transport [2018-02-22 15:53:16.276683] I 
[rpc-clnt.c:1986:rpc_clnt_reconfig] 0-virtimages-client-0: changing port to 49152 (from 0) [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify] 0-virtimages-client-2: parent translators are ready, attempting connect on transport [2018-02-22 15:53:16.282126] I [MSGID: 114057] [client-handshake.c:1478:select_server_supported_programs] 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330) [2018-02-22 15:53:16.282573] I [MSGID: 114046] [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0: Connected to virtimages-client-0, attached to remote volume '/mnt/virtimages/virtimgs'. [2018-02-22 15:53:16.282584] I [MSGID: 114047] [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0: Server and Client lk-version numbers are not same, reopening the fds [2018-02-22 15:53:16.282665] I [MSGID: 108005] [afr-common.c:4929:__afr_handle_child_up_event] 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back up; going online. 
[2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig] 0-virtimages-client-1: changing port to 49152 (from 0) [2018-02-22 15:53:16.282934] I [MSGID: 114035] [client-handshake.c:202:client_set_lk_version_cbk] 0-virtimages-client-0: Server lk version = 1 From michal.skrivanek at redhat.com Thu Feb 22 16:54:35 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 22 Feb 2018 17:54:35 +0100 Subject: [ovirt-users] oVirt 4.2: hostdev passthrough not working any more In-Reply-To: <26581562-7e41-35c3-5efc-5edebd841560@shurik.kiev.ua> References: <3b229ed8-23c9-010e-0d2f-41bf66653933@shurik.kiev.ua> <20180222093008.GA8924@Alexandra.local> <26581562-7e41-35c3-5efc-5edebd841560@shurik.kiev.ua> Message-ID: <16312417-9D9B-442E-A9E6-A478CBD8A44F@redhat.com> > On 22 Feb 2018, at 16:45, Alexandr Krivulya wrote: > > 22.02.2018 11:37, Alexandr Krivulya wrote: >> >> 22.02.2018 11:30, Martin Polednik wrote: >>> On 22/02/18 10:40 +0200, Alexandr Krivulya wrote: >>>> Hello, the same problem after upgrade to 4.2.1 :( >>>> >>>> >>>> 18.01.2018 11:53, Daniel Helgenberger wrote: >>>>> Hello, >>>>> >>>>> yesterday I upgraded to 4.2.0 from 4.1.8. >>>>> >>>>> Now I notice I cannot assign hostdev passthrough any more; in the GUI >>>>> the 'Pinned to host' list is empty; I cannot select any host for >>>>> passthrough host pinning. >>> >>> Does the host that previously worked report device passthrough >>> capability? In the UI it's the "Device Passthrough: Enabled" field >>> (and a similarly named field in the vdsm-client getCapabilities call). >> Now it shows "Device Passthrough: Disabled", but USB passthrough worked well on 4.1 > > After enabling intel_iommu, USB passthrough works again. But why is this option needed for USB now? It should not be; that sounds like a regression. Can you please open a bug on that and add logs?
Thanks, michal > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From lists at bootc.boo.tc Thu Feb 22 17:15:46 2018 From: lists at bootc.boo.tc (Chris Boot) Date: Thu, 22 Feb 2018 17:15:46 +0000 Subject: [ovirt-users] Change management network Message-ID: Hi all, I have an oVirt cluster on which I need to change which VLAN is the management network. The new management network is an existing VM network. I've configured IP addresses for all the hosts on this network, and I've even moved the HostedEngine VM onto this network. So far so good. What I cannot seem to do is actually change the "management network" toggle in the cluster to this network: the oVirt Engine complains, saying: "Error while executing action: Cannot edit Network. Changing management network in a non-empty cluster is not allowed." How can I get around this? I clearly cannot empty the cluster, as it contains all my existing VMs, hosts and the HostedEngine. Best regards, Chris -- Chris Boot bootc at boo.tc From artem.tambovskiy at gmail.com Thu Feb 22 19:21:04 2018 From: artem.tambovskiy at gmail.com (Artem Tambovskiy) Date: Thu, 22 Feb 2018 22:21:04 +0300 Subject: [ovirt-users] Question about sanlock lockspaces Message-ID: Hello, I'm still troubleshooting my cluster and trying to figure out which lockspaces should be present and which shouldn't. If the HE VM is not running, both ovirt-ha-agent and ovirt-ha-broker are down, and the storage has been disconnected with hosted-engine --disconnect-storage, should I see anything related to the HE storage domain in the sanlock client status output? For some reason, on one host I don't see anything, while the second one still reports a lockspace for the HE storage domain. Is this normal?
[root at ovirt1 ~]# sanlock client status daemon b1d7fea2-e8a9-4645-b449-97702fc3808e.ovirt1.tel p -1 helper p -1 listener p -1 status p 3763 p 62861 quaggaVM p 63111 powerDNS p 107818 pjsip_freepbx_14 p 109092 revizorro_dev p 109589 routerVM s a40cc3a9-54d6-40fd-acee-525ef29c8ce3:2:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_data/a40cc3a9-54d6-40fd-acee-525ef29c8ce3/dom_md/ids:0 s 4a7f8717-9bb0-4d80-8016-498fa4b88162:1:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_engine/4a7f8717-9bb0-4d80-8016-498fa4b88162/dom_md/ids:0 r a40cc3a9-54d6-40fd-acee-525ef29c8ce3:SDM:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_data/a40cc3a9-54d6-40fd-acee-525ef29c8ce3/dom_md/leases:1048576:49 p 3763 As it looks to me, the lockspace 4a7f8717-9bb0-4d80-8016-498fa4b88162:1:/rhev/data-center/mnt/glusterSD/ ovirt2.telia.ru\:_engine/4a7f8717-9bb0-4d80-8016-498fa4b88162/dom_md/ids:0 shouldn't be present, and it doesn't match the host_id, but maybe I'm wrong here... Regards, Artem -------------- next part -------------- An HTML attachment was scrubbed... URL: From arsene.gschwind at unibas.ch Thu Feb 22 19:23:20 2018 From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=) Date: Thu, 22 Feb 2018 20:23:20 +0100 Subject: [ovirt-users] Upgrade Cluster Compat Level from 4.1 to 4.2 Message-ID: <619b568d-6f4e-32c3-b516-2d6dc6a8914d@unibas.ch> Hi, I could successfully upgrade our oVirt environment from 4.1.9 to 4.2.1, really great job with the new interface. Everything runs well so far; the only problem I have is that trying to upgrade the Cluster Compatibility Level from 4.1 to 4.2 throws an error: Error while executing action: Update of cluster compatibility version failed because there are VMs/Templates [spfy-tscon] with incorrect configuration. To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation.
This VM is a Windows Server 2016 System with a custom property using smbios hook, could that be the problem? The engine log don't help a lot: 2018-02-22 18:49:42,026+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-57) [595850d5] EVENT_ID: CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update compatibility version of Vm/Template: [spfy-tscon], Message: [No Message] Are there some other place to investigate and get some more information about this error? Thanks a lot for any Hint/Help. rgds, Arsene -- *Arsène Gschwind* Fa. Sapify AG im Auftrag der Universität Basel IT Services Klingelbergstr. 70 | CH-4056 Basel | Switzerland Tel. +41 79 449 25 63 | http://its.unibas.ch ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.skrivanek at redhat.com Thu Feb 22 20:42:05 2018 From: michal.skrivanek at redhat.com (Michal Skrivanek) Date: Thu, 22 Feb 2018 21:42:05 +0100 Subject: [ovirt-users] Upgrade Cluster Compat Level from 4.1 to 4.2 In-Reply-To: <619b568d-6f4e-32c3-b516-2d6dc6a8914d@unibas.ch> References: <619b568d-6f4e-32c3-b516-2d6dc6a8914d@unibas.ch> Message-ID: > On 22 Feb 2018, at 20:23, Arsène Gschwind wrote: > > Hi, > > I could successfully upgrade our Ovirt Environment from 4.1.9 to 4.2.1, really great job with the new interface. > Everything runs well so far, the only problem I have is when trying to Upgrade the Cluster Compatibility Level from 4.1 to 4.2 it throw an error : > > Error while executing action: Update of cluster compatibility version failed because there are VMs/Templates [spfy-tscon] with incorrect configuration. To fix the issue, please go to each of them, edit and press OK. If the save does not pass, fix the dialog validation.
> Hi, if it's a user defined custom property you need to define it for the new cluster level too, similarly as you did originally Thanks, michal > The engine log don't help a lot: > > 2018-02-22 18:49:42,026+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-57) [595850d5] EVENT_ID: CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update compatibility version of Vm/Template: [spfy-tscon], Message: [No Message] > > Are there some other place to investigate and get some more information about this error? > Thanks a lot for any Hint/Help. > > rgds, > Arsene > -- > Arsène Gschwind > Fa. Sapify AG im Auftrag der Universität Basel > IT Services > Klingelbergstr. 70 | CH-4056 Basel | Switzerland > Tel. +41 79 449 25 63 | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From erekle.magradze at recogizer.de Thu Feb 22 20:48:06 2018 From: erekle.magradze at recogizer.de (Erekle Magradze) Date: Thu, 22 Feb 2018 21:48:06 +0100 Subject: [ovirt-users] rebooting hypervisors from time to time In-Reply-To: References: Message-ID: Dear all, It would be great if someone will share any experience regarding the similar case, would be great to have a hint where to start investigation. Thanks again Cheers Erekle On 02/22/2018 05:05 PM, Erekle Magradze wrote: > Hello there, > > I am facing the following problem from time to time one of the > hypervisor (there are 3 of them)s is rebooting, I am using > ovirt-release42-4.2.1-1.el7.centos.noarch and glsuter as a storage > backend (glusterfs-3.12.5-2.el7.x86_64). > > I am suspecting gluster because of the e.g. message bellow from one of > the volumes, > > Could you please help and suggest to which direction should > investigation go?
> > Thanks in advance > > Cheers > > Erekle > > > [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013] > [2018-02-22 15:41:10.198701] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). > Holes=1 overlaps=0 > [2018-02-22 15:41:10.198704] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). > Holes=1 overlaps=0 > [2018-02-22 15:42:11.293608] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). > Holes=1 overlaps=0 > [2018-02-22 15:53:16.245720] I [MSGID: 100030] > [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running > /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs > --volfile-server=10.0.0.21 --volfi > le-server=10.0.0.22 --volfile-server=10.0.0.23 > --volfile-id=/virtimages > /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages) > [2018-02-22 15:53:16.263712] W [MSGID: 101002] > [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' > is deprecated, preferred is 'transport.address-family', continuing > with correction > [2018-02-22 15:53:16.269595] I [MSGID: 101190] > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started > thread with index 1 > [2018-02-22 15:53:16.273483] I [MSGID: 101190] > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started > thread with index 2 > [2018-02-22 15:53:16.273594] W [MSGID: 101174] > [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead: > option 'parallel-readdir' is not recognized > [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-0: parent translators are ready, attempting > connect on transport > [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-1: parent translators 
are ready, attempting > connect on transport > [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig] > 0-virtimages-client-0: changing port to 49152 (from 0) > [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-2: parent translators are ready, attempting > connect on transport > [2018-02-22 15:53:16.282126] I [MSGID: 114057] > [client-handshake.c:1478:select_server_supported_programs] > 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437), > Version (330) > [2018-02-22 15:53:16.282573] I [MSGID: 114046] > [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0: > Connected to virtimages-client-0, attached to remote volume > '/mnt/virtimages/virtimgs'. > [2018-02-22 15:53:16.282584] I [MSGID: 114047] > [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0: > Server and Client lk-version numbers are not same, reopening the fds > [2018-02-22 15:53:16.282665] I [MSGID: 108005] > [afr-common.c:4929:__afr_handle_child_up_event] > 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back > up; going online. > [2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig] > 0-virtimages-client-1: changing port to 49152 (from 0) > [2018-02-22 15:53:16.282934] I [MSGID: 114035] > [client-handshake.c:202:client_set_lk_version_cbk] > 0-virtimages-client-0: Server lk version = 1 > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Recogizer Group GmbH Dr.rer.nat. 
Erekle Magradze Lead Big Data Engineering & DevOps Rheinwerkallee 2, 53227 Bonn Tel: +49 228 29974555 E-Mail erekle.magradze at recogizer.de recogizer.com ----------------------------------------------------------------- Recogizer Group GmbH Geschäftsführer: Oliver Habisch, Carsten Kreutze Handelsregister: Amtsgericht Bonn HRB 20724 Sitz der Gesellschaft: Bonn; USt-ID-Nr.: DE294195993 [Translated from German:] This e-mail contains confidential and/or legally protected information. If you are not the intended recipient or have received this e-mail in error, please notify the sender immediately and delete this e-mail. Unauthorized copying of this e-mail and unauthorized disclosure of the information it contains are not permitted. From vincent at epicenergy.ca Thu Feb 22 20:56:09 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 22 Feb 2018 12:56:09 -0800 Subject: [ovirt-users] Confused by logical networks In-Reply-To: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> References: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> Message-ID: I had similar confusion, and trying to assign static IPs was just not working for me. Here is how I got it to work: #1 you absolutely require a DNS server you can add entries to. I tried to go without, and it just does not work. For me, management network is on vlan 10, so each host grabs a 172.16.10.xx address. The oVirt engine (self hosted) also grabs an address in this subnet. I look in the DHCP leases and turn those into static assignments in the router, but left the config on the hosts to DHCP. Add those IPs to your DNS so that you can reach each host, and the engine, at hostname.domain.net (or whatever your fully qualified domain name is for each) Now, set your VM network to DHCP on a different, specific VLAN. Set that vlan up in your router with a small DHCP pool, only big enough for the VMs you want to run.
Make sure you allow that vlan through to the switchports connected to the nic you are using for your VMs. For me, the VMs are on vlan 30, so the subnet is 172.16.30.xx. Each VM boots up, makes a DHCP request on vlan 30, and is assigned an address. Then again go into your router and make this a static entry, leaving the config in the VM to DHCP. Once you install the agent in the VM, the IP will show up in the engine GUI. Then you can assign an FQDN to each vm's IP in your DNS server. Set up this way, I can migrate the VM to whichever host, and still access it over RDP by its FQDN. I also have dual NICs but I set mine up as a team (bridge) instead of one for MGMT and one for VMs. That way, I can also unplug any one Ethernet cable from any machine, and nothing happens, both the MGMT and VM network keep ticking away happily. I have each NIC plugged into a separate switch chassis, to gain redundancy on the switches. Hope some part of this helps you! On Feb 22, 2018 7:25 AM, "Paul-Erik Törrönen" wrote: > I'm not sure how the logical networks should work and would appreciate if > someone could shed some light into the matter, I've tried reading the > documentation¹ but have not become any wiser :-/ > > For the sake of argument, I have two hosts in the same cluster/DC, They > both have 2 network devices each (let's call them eth0 and eth1). On both > hosts the ovirtmgmgt is connected to eth0 and uses the 10.0.0.0/8-network. > Host 1 is 10.0.0.1 and host 2 is 10.0.0.2. All four network devices are > connected to one switch. > > Then I create a logical network, mylogic which should be 192.168.1.0/24, > which I assign to eth1 on each host, but define only for host 1 an > ip-address, 192.168.1.1, host 2 has also the network assigned to eth1, but > withouth an ip address. > > Next I create vm1 on host 1, give it a single virtual network connection > to mylogic, and configure the guest to use 192.168.1.2 with gw 192.168.1.1.
> Obviously I can from the guest ping 192.168.1.1 which is the host address > on the logical network as the guest is running on the same hardware where > the host ip address is defined. > > However, and this is where my confusion lies, if I now create another vm, > vm2, on host 2, attach its network device to the mylogic network and > configure it to use 192.168.1.3 with gw 192.168.1.1, I can not ping neither > 192.168.1.1 nor 192.168.1.2. > > My understanding is that vm2 should be able to ping the wm1 as well as the > gateway address defined on host 1. However this does not seem to be the > case. > > What have I missed here? > > TIA, > > Poltsi > > ¹ https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/ > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoffrsweet+ovirtusers at gmail.com Thu Feb 22 23:38:16 2018 From: geoffrsweet+ovirtusers at gmail.com (Geoff Sweet) Date: Thu, 22 Feb 2018 15:38:16 -0800 Subject: [ovirt-users] Installing Windows - no virtio-win.vfd as floppy option Message-ID: Howdy all, I'm following this document: https://www.ovirt.org/documentation/vmm-guide/chap-Installing_Windows_Virtual_Machines/ To launch a couple Windows VM's. The document outlines that the virtio-win.vfd file is automatically installed into the ISO storage domain for the engine host. And that you can upload it to other ISO domains as needed. For the life of me, I can not find that file anywhere across any of my ovirt servers. For reference this is 4.2.0. Am I missing something obvious? Thanks for any help you can offer. -Geoff -------------- next part -------------- An HTML attachment was scrubbed...
URL: From vincent at epicenergy.ca Thu Feb 22 23:50:50 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 22 Feb 2018 15:50:50 -0800 Subject: [ovirt-users] Installing Windows - no virtio-win.vfd as floppy option In-Reply-To: References: Message-ID: download the ISO separately and put it in your storage domain. Then use the "Change CD" function to load it part-way through the windows installation when it is looking for the disk to install to. Once you install all the drivers, the network will start working and the drives will show up and you'll be able to proceed with your windows installation. Following the instructions did not work for me either. *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Thu, Feb 22, 2018 at 3:38 PM, Geoff Sweet < geoffrsweet+ovirtusers at gmail.com> wrote: > Howdy all, I'm following this document: > > https://www.ovirt.org/documentation/vmm-guide/chap- > Installing_Windows_Virtual_Machines/ > > To launch a couple Windows VM's. The document outlines that the > virtio-win.vfd file is automatically installed into the ISO storage domain > for the engine host. And that you can upload it to other ISO domains as > needed. For the life of me, I can not find that file anywhere across any > of my ovirt servers. For reference this is 4.2.0. > > Am I missing something obvious? > > Thanks for any help you can offer. > > -Geoff > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geoffrsweet+ovirtusers at gmail.com Fri Feb 23 00:25:10 2018 From: geoffrsweet+ovirtusers at gmail.com (Geoff Sweet) Date: Thu, 22 Feb 2018 16:25:10 -0800 Subject: [ovirt-users] Installing Windows - no virtio-win.vfd as floppy option In-Reply-To: References: Message-ID: Yeah I just read that. 
I managed to discover that there is the "virtio-win" packages available in my repo, but apparently they aren't signed. So then I disovered the Fedora relased ones that at least are signed. Guess I will give those a whirl and see where it gets me. -G On Thu, Feb 22, 2018 at 3:50 PM, Vincent Royer wrote: > download the ISO separately and put it in your storage domain. Then use > the "Change CD" function to load it part-way through the windows > installation when it is looking for the disk to install to. Once you > install all the drivers, the network will start working and the drives will > show up and you'll be able to proceed with your windows installation. > > Following the instructions did not work for me either. > > *Vincent Royer* > *778-825-1057* > > > > *SUSTAINABLE MOBILE ENERGY SOLUTIONS* > > > > > On Thu, Feb 22, 2018 at 3:38 PM, Geoff Sweet < > geoffrsweet+ovirtusers at gmail.com> wrote: > >> Howdy all, I'm following this document: >> >> https://www.ovirt.org/documentation/vmm-guide/chap-Installin >> g_Windows_Virtual_Machines/ >> >> To launch a couple Windows VM's. The document outlines that the >> virtio-win.vfd file is automatically installed into the ISO storage domain >> for the engine host. And that you can upload it to other ISO domains as >> needed. For the life of me, I can not find that file anywhere across any >> of my ovirt servers. For reference this is 4.2.0. >> >> Am I missing something obvious? >> >> Thanks for any help you can offer. >> >> -Geoff >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vincent at epicenergy.ca Fri Feb 23 01:48:17 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 22 Feb 2018 17:48:17 -0800 Subject: [ovirt-users] Installing Windows - no virtio-win.vfd as floppy option In-Reply-To: References: Message-ID: Sure, feel free to reach out, I just did a big windows deployment and there were lots of hurdles to overcome, but it does work well in the end. I used the console during windows install, but I was doing it remotely, in an RDP session with a server on-site. So the mouse control was all out of whack, I had to keyboard through the setup. Once windows is installed and joined to the domain, you can RDP into it and it's much more bearable to operate than the console method. *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Thu, Feb 22, 2018 at 4:25 PM, Geoff Sweet < geoffrsweet+ovirtusers at gmail.com> wrote: > Yeah I just read that. I managed to discover that there is the > "virtio-win" packages available in my repo, but apparently they aren't > signed. So then I disovered the Fedora relased ones that at least are > signed. Guess I will give those a whirl and see where it gets me. > > -G > > On Thu, Feb 22, 2018 at 3:50 PM, Vincent Royer > wrote: > >> download the ISO separately and put it in your storage domain. Then use >> the "Change CD" function to load it part-way through the windows >> installation when it is looking for the disk to install to. Once you >> install all the drivers, the network will start working and the drives will >> show up and you'll be able to proceed with your windows installation. >> >> Following the instructions did not work for me either. 
>> >> *Vincent Royer* >> *778-825-1057 <(778)%20825-1057>* >> >> >> >> *SUSTAINABLE MOBILE ENERGY SOLUTIONS* >> >> >> >> >> On Thu, Feb 22, 2018 at 3:38 PM, Geoff Sweet < >> geoffrsweet+ovirtusers at gmail.com> wrote: >> >>> Howdy all, I'm following this document: >>> >>> https://www.ovirt.org/documentation/vmm-guide/chap-Installin >>> g_Windows_Virtual_Machines/ >>> >>> To launch a couple Windows VM's. The document outlines that the >>> virtio-win.vfd file is automatically installed into the ISO storage domain >>> for the engine host. And that you can upload it to other ISO domains as >>> needed. For the life of me, I can not find that file anywhere across any >>> of my ovirt servers. For reference this is 4.2.0. >>> >>> Am I missing something obvious? >>> >>> Thanks for any help you can offer. >>> >>> -Geoff >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From poltsi at poltsi.fi Fri Feb 23 05:16:14 2018 From: poltsi at poltsi.fi (=?UTF-8?Q?Paul-Erik_T=C3=B6rr=C3=B6nen?=) Date: Fri, 23 Feb 2018 07:16:14 +0200 Subject: [ovirt-users] Confused by logical networks In-Reply-To: References: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> Message-ID: <3ca7180272c3c2c004311032be0f12cb@webmail.poltsi.fi> On 2018-02-22 22:56, Vincent Royer wrote: > Hope some part of this helps you! Thanks for your answer, I need to digest it more in detail, but it seems like I indeed have missed some essential parts. I'll come back with a more detailed result later, but for the record, I do have a DNS server that I can manage completely in this case, and the switch management is also accessible to me. 
Poltsi From vincent at epicenergy.ca Fri Feb 23 05:22:28 2018 From: vincent at epicenergy.ca (Vincent Royer) Date: Thu, 22 Feb 2018 21:22:28 -0800 Subject: [ovirt-users] Confused by logical networks In-Reply-To: <3ca7180272c3c2c004311032be0f12cb@webmail.poltsi.fi> References: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> <3ca7180272c3c2c004311032be0f12cb@webmail.poltsi.fi> Message-ID: should work out for you then. In my case I was trying to do it all just with static addressing without dns entries and I quickly learned that it's just not designed to operate that way. Cheers *Vincent Royer* *778-825-1057* *SUSTAINABLE MOBILE ENERGY SOLUTIONS* On Thu, Feb 22, 2018 at 9:16 PM, Paul-Erik T?rr?nen wrote: > On 2018-02-22 22:56, Vincent Royer wrote: > >> Hope some part of this helps you! >> > > Thanks for your answer, I need to digest it more in detail, but it seems > like I indeed have missed some essential parts. > > I'll come back with a more detailed result later, but for the record, I do > have a DNS server that I can manage completely in this case, and the switch > management is also accessible to me. > > Poltsi > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykaul at redhat.com Fri Feb 23 06:41:23 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Fri, 23 Feb 2018 08:41:23 +0200 Subject: [ovirt-users] Confused by logical networks In-Reply-To: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> References: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi> Message-ID: On Feb 22, 2018 5:26 PM, "Paul-Erik T?rr?nen" wrote: I'm not sure how the logical networks should work and would appreciate if someone could shed some light into the matter, I've tried reading the documentation? 
but have not become any wiser :-/ For the sake of argument, I have two hosts in the same cluster/DC, They both have 2 network devices each (let's call them eth0 and eth1). On both hosts the ovirtmgmgt is connected to eth0 and uses the 10.0.0.0/8-network. Host 1 is 10.0.0.1 and host 2 is 10.0.0.2. All four network devices are connected to one switch. Then I create a logical network, mylogic which should be 192.168.1.0/24, which I assign to eth1 on each host, but define only for host 1 an ip-address, 192.168.1.1, host 2 has also the network assigned to eth1, but withouth an ip address. There's no reason really to assign IPs to hosts on the logical network. oVirt provides layer 2 (L2) connectivity to VMs. Whatever DHCP, DNS or gateway is on that network, they'll use. It should not be one of the hosts. Their interfaces just bridge the VM traffic over to the designated network, nothing more. Y. Next I create vm1 on host 1, give it a single virtual network connection to mylogic, and configure the guest to use 192.168.1.2 with gw 192.168.1.1. Obviously I can from the guest ping 192.168.1.1 which is the host address on the logical network as the guest is running on the same hardware where the host ip address is defined. However, and this is where my confusion lies, if I now create another vm, vm2, on host 2, attach its network device to the mylogic network and configure it to use 192.168.1.3 with gw 192.168.1.1, I can not ping neither 192.168.1.1 nor 192.168.1.2. My understanding is that vm2 should be able to ping the wm1 as well as the gateway address defined on host 1. However this does not seem to be the case. What have I missed here? TIA, Poltsi ? https://www.ovirt.org/documentation/admin-guide/chap-Logical_Networks/ _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From caignec at cines.fr Fri Feb 23 07:25:25 2018 From: caignec at cines.fr (Lionel Caignec) Date: Fri, 23 Feb 2018 08:25:25 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk Message-ID: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> Hi, i've a problem with snapshot. On one VM i've a "snapshot" ghost without name or uuid, only information is size (see attachment). In the snapshot tab there is no trace about this disk. In database (table images) i found this : f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | 2748779069440 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | 1 | 2 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | 1 | 2 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 But i does not know which line is my disk. Is it possible to delete directly into database? Or is it better to dump my disk to another new and delete the "corrupted one"? Another thing, when i try to move the disk to another storage domain i always get "uncaght exeption occured ..." and no error in engine.log. Thank you for helping. -- Lionel Caignec -------------- next part -------------- A non-text attachment was scrubbed... 
Name: ghost_disk.png Type: image/png Size: 10101 bytes Desc: not available URL: From mzamazal at redhat.com Fri Feb 23 08:56:46 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Fri, 23 Feb 2018 09:56:46 +0100 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: (Maor Lipchuk's message of "Thu, 22 Feb 2018 17:56:21 +0200") References: <4663-5a8ed280-61-50066700@129695947> Message-ID: <87y3jkrsmp.fsf@redhat.com> Maor Lipchuk writes: > I encountered a bug (see [1]) which contains the same error mentioned in > your VDSM logs (see [2]), but I doubt it is related. Indeed, it's not related. The error in vdsm_victor.log just means that the info gathering call tries to access libvirt domain before the incoming migration is completed. It's ugly but harmless. > Milan, maybe you have any advice to troubleshoot the issue? Will the > libvirt/qemu logs can help? It seems there is something wrong on (at least) the source host. There are no migration progress messages in the vdsm_ginger.log and there are warnings about stale stat samples. That looks like problems with calling libvirt - slow and/or stuck calls, maybe due to storage problems. The possibly faulty second disk could cause that. libvirt debug logs could tell us whether that is indeed the problem and whether it is caused by storage or something else. > I would suggest to open a bug on that issue so we can track it more > properly.
> > Regards, > Maor > > > [1] > https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to > VM running on 2 Hosts > > [2] > 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] > Internal server error (__init__:577) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, > in _handle_request > res = method(**params) > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in > _dynamicMethod > result = fn(*methodArgs) > File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies() > File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies > 'current_values': v.getIoTune()} > File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune > result = self.getIoTuneResponse() > File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse > res = self._dom.blockIoTune( > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, > in __getattr__ > % self.vmid) > NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not > started yet or was shut down > > On Thu, Feb 22, 2018 at 4:22 PM, fsoyer wrote: > >> Hi, >> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger >> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5), >> while the engine.log in the first mail on 2018-02-12 was for VMs standing >> on victor, migrated (or failed to migrate...) to ginger. Symptoms were >> exactly the same, in both directions, and VMs works like a charm before, >> and even after (migration "killed" by a poweroff of VMs). >> Am I the only one experimenting this problem ? >> >> >> Thanks >> -- >> >> Cordialement, >> >> *Frank Soyer * >> >> >> >> Le Jeudi, Février 22, 2018 00:45 CET, Maor Lipchuk >> a écrit: >> >> >> Hi Frank, >> >> Sorry about the delay repond.
>> I've been going through the logs you attached, although I could not find >> any specific indication why the migration failed because of the disk you >> were mentionning. >> Does this VM run with both disks on the target host without migration? >> >> Regards, >> Maor >> >> >> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote: >>> >>> Hi Maor, >>> sorry for the double post, I've change the email adress of my account and >>> supposed that I'd need to re-post it. >>> And thank you for your time. Here are the logs. I added a vdisk to an >>> existing VM : it no more migrates, needing to poweroff it after minutes. >>> Then simply deleting the second disk makes migrate it in exactly 9s without >>> problem ! >>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 >>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d >>> >>> -- >>> >>> Cordialement, >>> >>> *Frank Soyer * >>> Le Mercredi, Février 14, 2018 11:04 CET, Maor Lipchuk < >>> mlipchuk at redhat.com> a écrit: >>> >>> >>> Hi Frank, >>> >>> I already replied on your last email. >>> Can you provide the VDSM logs from the time of the migration failure for >>> both hosts: >>> ginger.local.systea.fr and >>> victor.local.systea.fr >>> >>> Thanks, >>> Maor >>> >>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: >>>> >>>> Hi all, >>>> I discovered yesterday a problem when migrating VM with more than one >>>> vdisk. >>>> On our test servers (oVirt4.1, shared storage with Gluster), I created 2 >>>> VMs needed for a test, from a template with a 20G vdisk. On this VMs I >>>> added a 100G vdisk (for this tests I didn't want to waste time to extend >>>> the existing vdisks... But I lost time finally...). The VMs with the 2 >>>> vdisks works well. >>>> Now I saw some updates waiting on the host. I tried to put it in >>>> maintenance... But it stopped on the two VM. They were marked "migrating", >>>> but no more accessible. Other (small) VMs with only 1 vdisk was migrated >>>> without problem at the same time.
>>>> I saw that a kvm process for the (big) VMs was launched on the source >>>> AND destination host, but after tens of minutes, the migration and the VMs >>>> was always freezed. I tried to cancel the migration for the VMs : failed. >>>> The only way to stop it was to poweroff the VMs : the kvm process died on >>>> the 2 hosts and the GUI alerted on a failed migration. >>>> In doubt, I tried to delete the second vdisk on one of this VMs : it >>>> migrates then without error ! And no access problem. >>>> I tried to extend the first vdisk of the second VM, the delete the >>>> second vdisk : it migrates now without problem ! >>>> >>>> So after another test with a VM with 2 vdisks, I can say that this >>>> blocked the migration process :( >>>> >>>> In engine.log, for a VMs with 1 vdisk migrating well, we see : >>>> >>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired >>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction >>>> group MIGRATE_VM with role type USER >>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 14f61ee0 >>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] START, >>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, >>>> MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', 
enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 775cd381 >>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, >>>> log id: 775cd381 >>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 >>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: >>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, >>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom >>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>> ginger.local.systea.fr, User: admin at internal-authz). 
>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>> START, FullListVDSCommand(HostName = victor.local.systea.fr, >>>> FullListVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 >>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, >>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, >>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, >>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', 
customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, >>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 >>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' >>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>> [54a65b66] 
Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>> [54a65b66] Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> was unexpectedly detected as 'MigratingTo' on VDS >>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) >>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >>>> ginger.local.systea.fr) ignoring it in the refresh until migration is >>>> done >>>> .... 
>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>> victor.local.systea.fr) >>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >>>> DestroyVDSCommand(HostName = victor.local.systea.fr, >>>> DestroyVmVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >>>> id: 560eca57 >>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >>>> DestroyVDSCommand, log id: 560eca57 >>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> moved from 'MigratingFrom' --> 'Down' >>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status >>>> 'MigratingTo' >>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> moved from 'MigratingTo' --> 'Up' >>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >>>> MigrateStatusVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: >>>> null, Custom Event ID: -1, Message: Migration completed (VM: >>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual >>>> downtime: (N/A)) >>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (ForkJoinPool-1-worker-4) [] Lock freed to object >>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >>>> FullListVDSCommand(HostName = ginger.local.systea.fr, >>>> FullListVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> 
vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, >>>> FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>> tabletEnable=true, pid=18748, guestDiskMapping={}, >>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, >>>> guestNumaNodes=[Ljava.lang.Object;@760085fd, >>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> 
plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, >>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 >>>> <(430)%20425-9600>, display=vnc}], log id: 7cc65298 >>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>> Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>> Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> 
{lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >>>> ect;@77951faf, custom={device_fbddd528-7d93-4 >>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>> snapshotId='null', logicalName='null', 
hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, >>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 >>>> <(430)%20426-3620>, display=vnc}], log id: 58cdef4c >>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>> [7fcb200a] Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.5}, type=graphics, 
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>> [7fcb200a] Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> >>>> >>>> >>>> >>>> For the VM with 2 vdisks we see : >>>> >>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired >>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >>>> group MIGRATE_VM with role type USER >>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 3702a9e0 >>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >>>> MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', 
enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 1840069c >>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >>>> log id: 1840069c >>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: >>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, >>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom >>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: >>>> victor.local.systea.fr, User: admin at internal-authz). >>>> ... 
>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] >>>> (DefaultQuartzScheduler4) >>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' >>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>>> was unexpectedly detected as 'MigratingTo' on VDS >>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>> victor.local.systea.fr) ignoring it in the refresh until migration is >>>> done >>>> ... >>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>>> was unexpectedly detected as 'MigratingTo' on VDS >>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>> victor.local.systea.fr) ignoring it in the refresh until migration is >>>> done >>>> >>>> >>>> and so on, the last lines repeating indefinitely for hours until we powered off >>>> the VM... >>>> Is this something known? Any idea about it? >>>> >>>> Thanks >>>> >>>> oVirt 4.1.6, last updated on Feb 13. Gluster 3.12.1.
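The telltale of the wedged migration in the excerpt above is the same pair of VmAnalyzer lines repeating every polling cycle for hours. A quick way to spot that pattern is to count the "unexpectedly detected" repeats per VM id: a healthy migration logs it a handful of times, a stuck one logs it until the VM is powered off. The sketch below is a generic grep/sort/uniq pipeline; the sample file and its two lines are reconstructed from the log excerpt in this thread (in real life, point the grep at /var/log/ovirt-engine/engine.log instead):

```shell
# Build a tiny sample log from two of the repeating lines above
# (the file path and the trimmed line contents are illustrative only).
cat > /tmp/engine-sample-stuck.log <<'EOF'
2018-02-12 16:49:16,455+01 INFO VmAnalyzer VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
2018-02-12 16:49:31,484+01 INFO VmAnalyzer VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr)
EOF

# Count repeats per VM id; a large count means the engine has been
# "ignoring it in the refresh until migration is done" for a long time.
grep "unexpectedly detected as 'MigratingTo'" /tmp/engine-sample-stuck.log \
  | grep -o "VM '[0-9a-f-]\{36\}'" \
  | sort | uniq -c
```

On a real engine.log the counts accumulate over time, so running the pipeline twice a few minutes apart and comparing counts distinguishes a migration that is merely slow from one that is wedged.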
>>>> >>>> -- >>>> >>>> Regards, >>>> >>>> *Frank Soyer * >>>> >>>> _______________________________________________ >>>> Users mailing list >>>> Users at ovirt.org >>>> http://lists.ovirt.org/mailman/listinfo/users >>>> >>> >>> >>> >>> >> >> >> From andreil1 at starlett.lv Fri Feb 23 11:35:48 2018 From: andreil1 at starlett.lv (Andrei V) Date: Fri, 23 Feb 2018 13:35:48 +0200 Subject: [ovirt-users] Can't move/copy VM disks between Data Centers Message-ID: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv> Hi, I have an oVirt setup: a separate PC hosting the engine + 2 nodes (#10 + #11) with local storage domains (internal RAIDs). The 1st node, #10, is currently active and can't be turned off. Since oVirt doesn't support more than 1 host in a data center with a local storage domain, as described here: http://lists.ovirt.org/pipermail/users/2018-January/086118.html I defined another data center with 1 node, #11. Problem: 1) I can't copy or move VM disks from node #10 (even for inactive VMs) to node #11; this node is NOT shown as a possible destination. 2) I can't migrate active VMs to node #11. 3) I added NFS shares to data center #1 -> node #10, but can't change data center #1 -> storage type to Shared, because this operation requires detachment of the local storage domains, which is not possible: several VMs are active and can't be stopped. VM disks are placed on local storage domains because of the performance limitations of our 1Gbit network. 2 VMs run our accounting/inventory control system and are too sensitive to NFS storage performance limits. How can I solve this problem? Thanks in advance. Andrei -------------- next part -------------- An HTML attachment was scrubbed... URL: From recreationh at gmail.com Fri Feb 23 10:31:37 2018 From: recreationh at gmail.com (Terry hey) Date: Fri, 23 Feb 2018 18:31:37 +0800 Subject: [ovirt-users] VM is locked, servlet, and SpiceVersion.txt problem Message-ID: Hello everyone! Thank you for taking the time to analyze my problem. In total, I have two questions.
I encountered a VM image lock problem. The following actions are what I did that left the VM image locked. First, I imported a VM. The import took too long, and engine.log repeatedly said it was waiting on a child command id: "2018-02-23 16:37:46,603+08 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-16) [1c44d543-4dcc-429d-a172-386cc860afe0] Command 'ImportVm' (id: '09718bd2-797d-4323-b1ad-1a85604543c3') waiting on child command id: '9e285b2d-c0c7-4a75-8c70-b619b45c6855' type:'CopyImageGroup' to complete " So I thought the operation was not normal. So: 1. I used "./unlock_entity.sh" to unlock the virtual disk of the VM. 2. The virtual disk was unlocked, but the VM was still locked. Therefore, I used "./unlock_entity.sh" to show locked VMs, but there was nothing. 3. Then I used "./taskcleaner.sh" to clean all tasks, but nothing happened. Q1: So now I would like to ask how to unlock the VM image so that I can delete it or use it. Q2: In addition, two errors/warnings appeared in engine.log: 1. 2018-02-23 09:57:19,495+08 WARN [org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-272) [] File'/usr/share/ovirt-engine/ui-plugins/dashboard-resources/css/main-tab.d3769419.css' is 2839039 bytes long. Please reconsider using this servlet for files larger than 1048576 bytes. 2. 2018-02-23 09:47:39,656+08 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils] (default task-193) [] Can't read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' for request '/ovirt-engine/services/files/spice/SpiceVersion.txt', will send a 404 error response. Do you have any idea what they mean? I really appreciate your help. Thank you! Regards, Terry -------------- next part -------------- An HTML attachment was scrubbed...
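When an import hangs like this, the id worth chasing is the child command id that the parent 'ImportVm' command is waiting on, since that is the entity to look up before reaching for unlock_entity.sh or taskcleaner.sh. It can be pulled straight out of engine.log; a minimal sketch, using the exact log line quoted above as sample input (the sample file path is illustrative, on a real engine the source would be /var/log/ovirt-engine/engine.log):

```shell
# Reproduce the stuck-import line from this message as sample input.
cat > /tmp/engine-sample-import.log <<'EOF'
2018-02-23 16:37:46,603+08 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-16) [1c44d543-4dcc-429d-a172-386cc860afe0] Command 'ImportVm' (id: '09718bd2-797d-4323-b1ad-1a85604543c3') waiting on child command id: '9e285b2d-c0c7-4a75-8c70-b619b45c6855' type:'CopyImageGroup' to complete
EOF

# Extract the child command id the ImportVm command is blocked on:
# first isolate the "child command id: '...'" fragment, then keep the UUID.
grep -o "child command id: '[0-9a-f-]*'" /tmp/engine-sample-import.log \
  | grep -o "[0-9a-f]\{8\}-[0-9a-f-]*"
```

With that UUID in hand you can check whether the child command is still listed before unlocking anything, rather than cleaning all tasks blindly.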
URL: From recreationh at gmail.com Fri Feb 23 10:34:00 2018 From: recreationh at gmail.com (Terry hey) Date: Fri, 23 Feb 2018 18:34:00 +0800 Subject: [ovirt-users] Power management - oVirt 4.2 In-Reply-To: References: Message-ID: Dear Martin, I am very sorry that I replied to you so late. Do you mean that 4.2 can support iLO5 by selecting the option "ilo4" in power management? "from the error message below I'd say that you are either not using correct IP address of iLO5 interface or you haven't enabled remote access to your iLO5 interface" I just tried it and double-checked that I did not type a wrong IP, but the error message is the same. Regards Terry 2018-02-08 16:13 GMT+08:00 Martin Perina : > Hi Terry, > > from the error message below I'd say that you are either not using correct > IP address of iLO5 interface or you haven't enabled remote access to your > iLO5 interface. > According to [1], iLO5 should be fully IPMI compatible. So are you sure that > you enabled the remote access to your iLO5 address in iLO5 management? > Please consult [1] on how to enable everything and use a user with at least > Operator privileges. > > Regards > > Martin > > [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018324en_us > > > On Thu, Feb 8, 2018 at 7:57 AM, Terry hey wrote: > >> Dear Martin, >> >> Thank you for helping me. To answer your question: >> 1. Does the Test in the Edit fence agent dialog work? >> Ans: it shows "Test failed: Internal JSON-RPC error" >> >> Regardless of the failed result, I pressed "OK" to enable power management. >> Four event log entries appeared in "Events": >> ********************************The following are the logs in >> "Events"******************************** >> Host host01 configuration was updated by admin at internal-authz. >> Kdump integration is enabled for host hostv01, but kdump is not >> configured properly on host. >> Health check on Host host01 indicates that future attempts to Stop this
>> Health check on Host host01 indicates that future attempts to Start this
>> host using Power-Management are expected to fail.
>>
>> 2. If not, could you please try to install the fence-agents-all package
>> on a different host and execute?
>> Ans: It just shows "Connection timed out".
>>
>> So, does it mean that iLO5 is not supported yet, or did I configure it
>> wrongly?
>>
>> Regards,
>> Terry
>>
>> 2018-02-02 15:46 GMT+08:00 Martin Perina :
>>
>>>
>>> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey wrote:
>>>
>>>> Dear Martin,
>>>>
>>>> Um.. Since I am going to use an HPE ProLiant DL360 Gen10 server to set
>>>> up an oVirt Node (hypervisor), and HP G10 uses iLO5 rather than iLO4,
>>>> I would like to ask whether oVirt power management supports iLO5 or
>>>> not.
>>>>
>>> We don't have any hardware with iLO5 available, but there is a good
>>> chance that it will be compatible with iLO4. Have you tried to set up
>>> your server with iLO4? Does the Test in the Edit fence agent dialog
>>> work? If not, could you please try to install the fence-agents-all
>>> package on a different host and execute the following:
>>>
>>> fence_ilo4 -a <ilo_address> -l <username> -p <password> -v -o status
>>>
>>> and share the output?
>>>
>>> Thanks
>>>
>>> Martin
>>>
>>>
>>>> If not, do you have any idea how to set up power management with HP
>>>> G10?
>>>>
>>>> Regards,
>>>> Terry
>>>>
>>>> 2018-02-01 16:21 GMT+08:00 Martin Perina :
>>>>>
>>>>>
>>>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto <
>>>>> lorenzetto.luca at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi.
>>>>>> Try using the standard ipmi.
>>>>>>
>>>>> It's not just an alias; ilo3/ilo4 also have different defaults than
>>>>> ipmilan. For example, if you use ilo4, then by default the following
>>>>> is used:
>>>>>
>>>>> lanplus=1
>>>>> power_wait=4
>>>>>
>>>>> So I recommend starting with ilo4 and adding any necessary custom
>>>>> options into the Options field.
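Martin's suggested manual check can be run from any host that has the fence-agents-all package installed. A minimal sketch; the address and credentials below are placeholders that must be replaced with real iLO values, and the fence_ipmilan fallback is only a suggestion in case the iLO5 firmware speaks plain IPMI:

```shell
# Placeholder iLO address/credentials -- substitute your own values.
fence_ilo4 -a 192.0.2.10 -l admin -p secret -v -o status

# If the iLO5 interface only answers generic IPMI, the plain agent is
# worth trying as well (-P enables lanplus, matching the ilo4 default
# lanplus=1 quoted in this thread):
fence_ipmilan -P -a 192.0.2.10 -l admin -p secret -o status
```

A "status" result here means the engine should be able to drive the same fence agent from the Power Management dialog; a timeout points at network or iLO remote-access configuration rather than at oVirt.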
If you need some custom
>>>>> options, could you please share them with us? It would be very
>>>>> helpful for us; if needed, we could introduce ilo5 with different
>>>>> defaults than ilo4.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Martin
>>>>>
>>>>>
>>>>>> Luca
>>>>>>
>>>>>>
>>>>>> On 31 Jan 2018 at 11:14 PM, "Terry hey" wrote:
>>>>>>
>>>>>>> Dear all,
>>>>>>> Does oVirt 4.2 power management support iLO5? I could not see an
>>>>>>> iLO5 option in Power Management.
>>>>>>>
>>>>>>> Regards
>>>>>>> Terry
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users at ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>>
>>>>>>>
>>>>>> _______________________________________________
>>>>>> Users mailing list
>>>>>> Users at ovirt.org
>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Martin Perina
>>>>> Associate Manager, Software Engineering
>>>>> Red Hat Czech s.r.o.
>>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Martin Perina
>>> Associate Manager, Software Engineering
>>> Red Hat Czech s.r.o.
>>>
>>
>>
>
>
> --
> Martin Perina
> Associate Manager, Software Engineering
> Red Hat Czech s.r.o.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From arsene.gschwind at unibas.ch Fri Feb 23 12:31:21 2018
From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=)
Date: Fri, 23 Feb 2018 13:31:21 +0100
Subject: [ovirt-users] Upgrade Cluster Compat Level from 4.1 to 4.2
In-Reply-To: References: <619b568d-6f4e-32c3-b516-2d6dc6a8914d@unibas.ch>
Message-ID: <1288d11e-f8a8-5050-000e-2549cbdc7ed8@unibas.ch>

Hi Michal,
Thanks, I had forgotten to set the hook property for version 4.2. That did
the trick.
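As a side note on the fix above: user-defined custom properties are stored per cluster compatibility version, so after raising the level they have to be defined for 4.2 as well. A hedged sketch, run on the engine host; the property name "smbios" and the permissive regex are assumptions based on the hook mentioned in this thread, so adjust them to the property your hook actually consumes:

```shell
# Hypothetical property name/regex -- use the ones your hook expects.
engine-config -s "UserDefinedVMProperties=smbios=^.*$" --cver=4.2
# engine-config changes only take effect after an engine restart.
systemctl restart ovirt-engine
```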
Rgds,
Arsene

On 02/22/2018 09:42 PM, Michal Skrivanek wrote:
>
>
>> On 22 Feb 2018, at 20:23, Arsène Gschwind wrote:
>>
>> Hi,
>>
>> I could successfully upgrade our oVirt environment from 4.1.9 to
>> 4.2.1, really great job with the new interface.
>> Everything runs well so far; the only problem I have is that when
>> trying to upgrade the Cluster Compatibility Level from 4.1 to 4.2 it
>> throws an error:
>>
>> Error while executing action: Update of cluster compatibility version
>> failed because there are VMs/Templates [spfy-tscon] with incorrect
>> configuration. To fix the issue, please go to each of them, edit and
>> press OK. If the save does not pass, fix the dialog validation.
>>
>> This VM is a Windows Server 2016 system with a custom property using
>> the smbios hook; could that be the problem?
>>
> Hi,
> if it's a user-defined custom property, you need to define it for the
> new cluster level too, similarly to how you did it originally.
>
> Thanks,
> michal
>>
>> The engine log doesn't help a lot:
>>
>> 2018-02-22 18:49:42,026+01 ERROR
>> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
>> (default task-57) [595850d5] EVENT_ID:
>> CLUSTER_CANNOT_UPDATE_VM_COMPATIBILITY_VERSION(12,005), Cannot update
>> compatibility version of Vm/Template: [spfy-tscon], Message: [No
>> Message]
>>
>> Is there some other place to investigate and get some more
>> information about this error?
>>
>> Thanks a lot for any hint/help.
>>
>> rgds,
>> Arsene
>>
>> --
>>
>> Arsène Gschwind
>> Fa. Sapify AG im Auftrag der Universität Basel
>> IT Services
>> Klingelbergstr. 70 | CH-4056 Basel | Switzerland
>> Tel. +41 79 449 25 63 | http://its.unibas.ch
>> ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11
>>
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr.
70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From tjelinek at redhat.com Fri Feb 23 12:36:35 2018
From: tjelinek at redhat.com (Tomas Jelinek)
Date: Fri, 23 Feb 2018 13:36:35 +0100
Subject: [ovirt-users] VM is locked, servlet, and SpiceVersion.txt problem
In-Reply-To: References: Message-ID:

On Fri, Feb 23, 2018 at 11:31 AM, Terry hey wrote:

> Hello everyone!
> Thank you for your time analyzing my problem. In total, I have two
> questions.
> I encountered a VM image lock problem. The following actions are what I
> did, and they left the VM image locked.
> First, I imported a VM. The import was taking too long, and engine.log
> repeatedly said it was waiting on a child command id:
> "2018-02-23 16:37:46,603+08 INFO [org.ovirt.engine.core.bll.
> ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-16)
> [1c44d543-4dcc-429d-a172-386cc860afe0] Command 'ImportVm' (id:
> '09718bd2-797d-4323-b1ad-1a85604543c3') waiting on child command id:
> '9e285b2d-c0c7-4a75-8c70-b619b45c6855' type:'CopyImageGroup' to complete
> "
> So I thought the operation was not normal, and:
> 1. I used "./unlock_entity.sh" to unlock the virtual disk of the VM.
> 2. The virtual disk was unlocked, but the VM was still locked. Therefore,
> I used "./unlock_entity.sh" to show locked VMs, but it listed nothing.
> 3. Then I used "./taskcleaner.sh" to clean all tasks, but nothing
> happened.
>
> Q1: So now I would like to ask how to unlock the VM image so that I can
> delete or use it.
>
> Q2: In addition, two errors/warnings appeared in engine.log:
> 1. 2018-02-23 09:57:19,495+08 WARN [org.ovirt.engine.core.utils.servlet.ServletUtils]
> (default task-272) [] File '/usr/share/ovirt-engine/ui-plugins/dashboard-
> resources/css/main-tab.d3769419.css' is 2839039 bytes long.
Please
> reconsider using this servlet for files larger than 1048576 bytes.
> 2. 2018-02-23 09:47:39,656+08 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils]
> (default task-193) [] Can't read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt'
> for request '/ovirt-engine/services/files/spice/SpiceVersion.txt', will
> send a 404 error response.
> Do you guys have any idea what they mean?
>

This is pretty cool :) The SpiceVersion.txt was used years ago for the
ActiveX SPICE client. That client has not been in oVirt for ages, but the
handling of this file was left forgotten in the code. I have opened a bug
to clean this up:
https://bugzilla.redhat.com/show_bug.cgi?id=1548407
But don't worry about this error; it has no effect on functionality.

> I really appreciate your help. Thank you!
>
> Regards
> Terry
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From lists at bootc.boo.tc Fri Feb 23 13:31:12 2018
From: lists at bootc.boo.tc (Chris Boot)
Date: Fri, 23 Feb 2018 13:31:12 +0000
Subject: [ovirt-users] Change management network
In-Reply-To: References: Message-ID: <8e2a2323-8d72-c908-e23d-e5a49e1e0c41@bootc.boo.tc>

On 22/02/18 17:15, Chris Boot wrote:
> Hi all,
>
> I have an oVirt cluster on which I need to change which VLAN is the
> management network.
>
> The new management network is an existing VM network. I've configured IP
> addresses for all the hosts on this network, and I've even moved the
> HostedEngine VM onto this network. So far so good.
>
> What I cannot seem to do is actually change the "management network"
> toggle in the cluster to this network: the oVirt Engine complains,
> saying:
>
> "Error while executing action: Cannot edit Network. Changing management
> network in a non-empty cluster is not allowed."
>
> How can I get around this?
I clearly cannot empty the cluster, as the
> cluster contains all my existing VMs, hosts and HostedEngine.

It seems I have to create a new cluster, migrate a host over, migrate a few
VMs, and so on until everything is moved over. This really isn't ideal, as
the VMs have to be shut down and reconfigured, but it is doable.

What I seem to be stuck on is changing the cluster of the HostedEngine. I
actually have it running on a host in the new cluster, but it still appears
in the old cluster in the web interface, with no way to change this.

Any hints, please?

This is on oVirt 4.1.9. Upgrading to 4.2.1 is not out of the question if
it's likely to help.

Thanks,
Chris

--
Chris Boot
bootc at boo.tc

From gianluca.cecchi at gmail.com Fri Feb 23 13:57:05 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Fri, 23 Feb 2018 14:57:05 +0100
Subject: [ovirt-users] ovn related events every 5 minutes on 4.2.1
Message-ID:

Hello,
in my events pane of 4.2.1 I see, every 5 minutes, this event:

Networks of Provider ovirt-provider-ovn were successfully synchronized.

It fills up my table and prevents easy reading of the other ones...
Can I disable or "relax" this?
Thanks,
Gianluca
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From arsene.gschwind at unibas.ch Fri Feb 23 14:27:52 2018
From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=)
Date: Fri, 23 Feb 2018 15:27:52 +0100
Subject: [ovirt-users] Network and disk inactive after 4.2.1 upgrade
In-Reply-To: <20180213150534.GA15147@cmadams.net>
References: <20180213150534.GA15147@cmadams.net>
Message-ID: <8037634a-2e17-fa6f-76de-3f7fd61685d9@unibas.ch>

Hi Chris,
After upgrading from 4.1.9 to 4.2.1 I had the same problem. I had to
reactivate the network and disk on all VMs.
rgds,
Arsene

On 02/13/2018 04:05 PM, Chris Adams wrote:
> I upgraded my dev cluster from 4.2.0 to 4.2.1 yesterday, and I noticed
> that all my VMs show the network interfaces unplugged and disks inactive
> (despite the VMs being up and running just fine). This includes the
> hosted engine.
>
> I had not rebooted VMs after upgrading, so I tried powering one off and
> on; it would not start until I manually activated the disk.
>
> I haven't seen a problem like this before (although it usually means
> that I did something wrong :) ) - what should I look at?

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From yxpengi386 at 163.com Fri Feb 23 16:05:34 2018
From: yxpengi386 at 163.com (pengyixiang)
Date: Sat, 24 Feb 2018 00:05:34 +0800 (CST)
Subject: [ovirt-users] restore snapshot cannot restore memory
Message-ID: <4c04494d.7cd2.161c3699e9b.Coremail.yxpengi386@163.com>

Hello,
I found that if we restore a snapshot, the memory cannot be restored. I
tested it with ovirt-4.1.2, vdsm-4.17.0 and libvirt-3.0.0, and I get the
errors shown in [1]. It seems the VM is not paused while the snapshot is
being created, but self._underlyingCont() is called during VM start, so the
error occurs; the VM is then started in libvirt but shut down in vdsm. With
the changes in [2], it works well.
[1]
2018-02-12 19:39:23,830+0800 ERROR (vm/d7be0fde) [virt.vm] (vmId='d7be0fde-f9b9-4447-a250-2453482faef9') The vm start process failed (vm:662)
Traceback (most recent call last):
  File "/usr/share/vdsm/virt/vm.py", line 607, in _startUnderlyingVm
    self._completeIncomingMigration()
  File "/usr/share/vdsm/virt/vm.py", line 3268, in _completeIncomingMigration
    self.cont()
  File "/usr/share/vdsm/virt/vm.py", line 1128, in cont
    self._underlyingCont()
  File "/usr/share/vdsm/virt/vm.py", line 3368, in _underlyingCont
    self._dom.resume()
  File "/usr/lib/python2.7/dist-packages/vdsm/virt/virdomain.py", line 69, in f
    ret = attr(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/vdsm/libvirtconnection.py", line 123, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/vdsm/utils.py", line 926, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1905, in resume
    if ret == -1: raise libvirtError ('virDomainResume() failed', dom=self)
libvirtError: Requested operation is not valid: domain is already running

[2]
--- a/Linx_Node/node_iso/install_script/py/vdsm/vdsm/virt/vm.py
+++ b/Linx_Node/node_iso/install_script/py/vdsm/vdsm/virt/vm.py
@@ -3677,6 +3677,8 @@ class Vm(object):
         else:
             snapFlags |= libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
 
+        self._underlyingPause()
+
         # When creating memory snapshot libvirt will pause the vm
         should_freeze = not (memoryParams or frozen)
@@ -3734,6 +3736,8 @@ class Vm(object):
             if memoryParams:
                 self.cif.teardownVolumePath(memoryVol)
 
+        self._underlyingCont()
+
         # Returning quiesce to notify the manager whether the guest agent
         # froze and flushed the filesystems or not.
         quiesce = should_freeze and freezed["status"]["code"] == 0
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From erekle.magradze at recogizer.de Fri Feb 23 17:43:21 2018
From: erekle.magradze at recogizer.de (Erekle Magradze)
Date: Fri, 23 Feb 2018 18:43:21 +0100
Subject: [ovirt-users] rebooting hypervisors from time to time
In-Reply-To: References: <25891cac-5d5a-8dac-421d-e8e19277d3de@recogizer.de>
Message-ID:

Hi,
Thanks a lot for having a look. The HA VMs were migrated, the non-HA VMs
were turned off, the syslogs were not saying anything useful, and dmesg
reported a graceful reboot.
What are the errors indicating? Maybe there is a useful hint on how to
proceed with the investigation?
Thanks in advance again
Cheers
Erekle

On 02/23/2018 06:15 PM, Mahdi Adnan wrote:
> Hi,
>
> The log doesn't indicate an HV reboot, and I see lots of errors in the
> logs. During the reboot, what happened to the VMs inside the HV?
> Migrated? Paused? What about the system's logs? Do they indicate a
> graceful shutdown?
>
>
> --
>
> Respectfully,
> Mahdi A. Mahdi
>
> ------------------------------------------------------------------------
> *From:* Erekle Magradze
> *Sent:* Friday, February 23, 2018 2:48 PM
> *To:* Mahdi Adnan; users at ovirt.org
> *Subject:* Re: [ovirt-users] rebooting hypervisors from time to time
>
> Thanks for the reply,
>
> I've attached all the logs from yesterday. The reboot happened during
> the day, but this is not the first time, and this is not the only
> hypervisor.
>
> Kind Regards
>
> Erekle
>
>
> On 02/23/2018 09:00 AM, Mahdi Adnan wrote:
>> Hi,
>>
>> Can you post the VDSM and Engine logs ?
>>
>>
>> --
>>
>> Respectfully*
>> **Mahdi A.
Mahdi*
>>
>> ------------------------------------------------------------------------
>> *From:* users-bounces at ovirt.org
>> on behalf of Erekle Magradze
>>
>> *Sent:* Thursday, February 22, 2018 11:48 PM
>> *To:* users at ovirt.org
>> *Subject:* Re: [ovirt-users] rebooting hypervisors from time to time
>> Dear all,
>>
>> It would be great if someone would share any experience regarding a
>> similar case; it would be great to have a hint on where to start the
>> investigation.
>>
>> Thanks again
>>
>> Cheers
>>
>> Erekle
>>
>>
>> On 02/22/2018 05:05 PM, Erekle Magradze wrote:
>> > Hello there,
>> >
>> > I am facing the following problem: from time to time, one of the
>> > hypervisors (there are 3 of them) reboots. I am using
>> > ovirt-release42-4.2.1-1.el7.centos.noarch and gluster as a storage
>> > backend (glusterfs-3.12.5-2.el7.x86_64).
>> >
>> > I am suspecting gluster because of, e.g., the message below from one
>> > of the volumes.
>> >
>> > Could you please help and suggest in which direction the
>> > investigation should go?
>> >
>> > Thanks in advance
>> >
>> > Cheers
>> >
>> > Erekle
>> >
>> >
>> > [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013]
>> > [2018-02-22 15:41:10.198701] I [MSGID: 109063]
>> > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
>> > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000).
>> > Holes=1 overlaps=0 >> > [2018-02-22 15:53:16.245720] I [MSGID: 100030] >> > [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running >> > /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs >> > --volfile-server=10.0.0.21 --volfi >> > le-server=10.0.0.22 --volfile-server=10.0.0.23 >> > --volfile-id=/virtimages >> > /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages) >> > [2018-02-22 15:53:16.263712] W [MSGID: 101002] >> > [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' >> > is deprecated, preferred is 'transport.address-family', continuing >> > with correction >> > [2018-02-22 15:53:16.269595] I [MSGID: 101190] >> > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started >> > thread with index 1 >> > [2018-02-22 15:53:16.273483] I [MSGID: 101190] >> > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started >> > thread with index 2 >> > [2018-02-22 15:53:16.273594] W [MSGID: 101174] >> > [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead: >> > option 'parallel-readdir' is not recognized >> > [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify] >> > 0-virtimages-client-0: parent translators are ready, attempting >> > connect on transport >> > [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify] >> > 0-virtimages-client-1: parent translators are ready, attempting >> > connect on transport >> > [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig] >> > 0-virtimages-client-0: changing port to 49152 (from 0) >> > [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify] >> > 0-virtimages-client-2: parent translators are ready, attempting >> > connect on transport >> > [2018-02-22 15:53:16.282126] I [MSGID: 114057] >> > [client-handshake.c:1478:select_server_supported_programs] >> > 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437), >> > Version (330) >> > [2018-02-22 15:53:16.282573] I [MSGID: 114046] >> > 
[client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0:
>> > Connected to virtimages-client-0, attached to remote volume
>> > '/mnt/virtimages/virtimgs'.
>> > [2018-02-22 15:53:16.282584] I [MSGID: 114047]
>> > [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0:
>> > Server and Client lk-version numbers are not same, reopening the fds
>> > [2018-02-22 15:53:16.282665] I [MSGID: 108005]
>> > [afr-common.c:4929:__afr_handle_child_up_event]
>> > 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back
>> > up; going online.
>> > [2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
>> > 0-virtimages-client-1: changing port to 49152 (from 0)
>> > [2018-02-22 15:53:16.282934] I [MSGID: 114035]
>> > [client-handshake.c:202:client_set_lk_version_cbk]
>> > 0-virtimages-client-0: Server lk version = 1
>> >
>> > _______________________________________________
>> > Users mailing list
>> > Users at ovirt.org
>> > http://lists.ovirt.org/mailman/listinfo/users
>>
>> --
>> Recogizer Group GmbH
>>
>> Dr.rer.nat. Erekle Magradze
>> Lead Big Data Engineering & DevOps
>> Rheinwerkallee 2, 53227 Bonn
>> Tel: +49 228 29974555
>>
>> E-Mail erekle.magradze at recogizer.de
>> recogizer.com
>>
>> -----------------------------------------------------------------
>>
>> Recogizer Group GmbH
>> Geschäftsführer: Oliver Habisch, Carsten Kreutze
>> Handelsregister: Amtsgericht Bonn HRB 20724
>> Sitz der Gesellschaft: Bonn; USt-ID-Nr.: DE294195993
>> Diese E-Mail enthält vertrauliche und/oder rechtlich geschützte
>> Informationen. Wenn Sie nicht der richtige Adressat sind oder diese
>> E-Mail irrtümlich erhalten haben, informieren Sie bitte sofort den
>> Absender und löschen Sie diese Mail. Das unerlaubte Kopieren sowie
>> die unbefugte Weitergabe dieser Mail und der darin enthaltenen
>> Informationen ist nicht gestattet.
>> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mahdi.adnan at outlook.com Fri Feb 23 08:00:07 2018 From: mahdi.adnan at outlook.com (Mahdi Adnan) Date: Fri, 23 Feb 2018 08:00:07 +0000 Subject: [ovirt-users] rebooting hypervisors from time to time In-Reply-To: References: , Message-ID: Hi, Can you post the VDSM and Engine logs ? -- Respectfully Mahdi A. Mahdi ________________________________ From: users-bounces at ovirt.org on behalf of Erekle Magradze Sent: Thursday, February 22, 2018 11:48 PM To: users at ovirt.org Subject: Re: [ovirt-users] rebooting hypervisors from time to time Dear all, It would be great if someone will share any experience regarding the similar case, would be great to have a hint where to start investigation. Thanks again Cheers Erekle On 02/22/2018 05:05 PM, Erekle Magradze wrote: > Hello there, > > I am facing the following problem from time to time one of the > hypervisor (there are 3 of them)s is rebooting, I am using > ovirt-release42-4.2.1-1.el7.centos.noarch and glsuter as a storage > backend (glusterfs-3.12.5-2.el7.x86_64). > > I am suspecting gluster because of the e.g. message bellow from one of > the volumes, > > Could you please help and suggest to which direction should > investigation go? > > Thanks in advance > > Cheers > > Erekle > > > [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013] > [2018-02-22 15:41:10.198701] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). > Holes=1 overlaps=0 > [2018-02-22 15:41:10.198704] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). 
> Holes=1 overlaps=0 > [2018-02-22 15:42:11.293608] I [MSGID: 109063] > [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found > anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000). > Holes=1 overlaps=0 > [2018-02-22 15:53:16.245720] I [MSGID: 100030] > [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running > /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs > --volfile-server=10.0.0.21 --volfi > le-server=10.0.0.22 --volfile-server=10.0.0.23 > --volfile-id=/virtimages > /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages) > [2018-02-22 15:53:16.263712] W [MSGID: 101002] > [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family' > is deprecated, preferred is 'transport.address-family', continuing > with correction > [2018-02-22 15:53:16.269595] I [MSGID: 101190] > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started > thread with index 1 > [2018-02-22 15:53:16.273483] I [MSGID: 101190] > [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started > thread with index 2 > [2018-02-22 15:53:16.273594] W [MSGID: 101174] > [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead: > option 'parallel-readdir' is not recognized > [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-0: parent translators are ready, attempting > connect on transport > [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-1: parent translators are ready, attempting > connect on transport > [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig] > 0-virtimages-client-0: changing port to 49152 (from 0) > [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify] > 0-virtimages-client-2: parent translators are ready, attempting > connect on transport > [2018-02-22 15:53:16.282126] I [MSGID: 114057] > [client-handshake.c:1478:select_server_supported_programs] > 0-virtimages-client-0: Using Program GlusterFS 3.3, Num 
(1298437), > Version (330) > [2018-02-22 15:53:16.282573] I [MSGID: 114046] > [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0: > Connected to virtimages-client-0, attached to remote volume > '/mnt/virtimages/virtimgs'. > [2018-02-22 15:53:16.282584] I [MSGID: 114047] > [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0: > Server and Client lk-version numbers are not same, reopening the fds > [2018-02-22 15:53:16.282665] I [MSGID: 108005] > [afr-common.c:4929:__afr_handle_child_up_event] > 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back > up; going online. > [2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig] > 0-virtimages-client-1: changing port to 49152 (from 0) > [2018-02-22 15:53:16.282934] I [MSGID: 114035] > [client-handshake.c:202:client_set_lk_version_cbk] > 0-virtimages-client-0: Server lk version = 1 > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- Recogizer Group GmbH Dr.rer.nat. Erekle Magradze Lead Big Data Engineering & DevOps Rheinwerkallee 2, 53227 Bonn Tel: +49 228 29974555 E-Mail erekle.magradze at recogizer.de recogizer.com ----------------------------------------------------------------- Recogizer Group GmbH Gesch?ftsf?hrer: Oliver Habisch, Carsten Kreutze Handelsregister: Amtsgericht Bonn HRB 20724 Sitz der Gesellschaft: Bonn; USt-ID-Nr.: DE294195993 Diese E-Mail enth?lt vertrauliche und/oder rechtlich gesch?tzte Informationen. Wenn Sie nicht der richtige Adressat sind oder diese E-Mail irrt?mlich erhalten haben, informieren Sie bitte sofort den Absender und l?schen Sie diese Mail. Das unerlaubte Kopieren sowie die unbefugte Weitergabe dieser Mail und der darin enthaltenen Informationen ist nicht gestattet. 
_______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- An HTML attachment was scrubbed... URL: From arsene.gschwind at unibas.ch Fri Feb 23 19:14:22 2018 From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=) Date: Fri, 23 Feb 2018 20:14:22 +0100 Subject: [ovirt-users] After upgrade to 4.2 some VM won't start Message-ID: Hi, After upgrading cluster compatibility to 4.2 some VM won't start and I'm unable to figured out why, it throws a java exception. I've attached the engine log. Thanks for any help/hint. rgds, Arsene -- *Ars?ne Gschwind* Fa. Sapify AG im Auftrag der Universit?t Basel IT Services Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland Tel. +41 79 449 25 63? | http://its.unibas.ch ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: engine.log.gz Type: application/gzip Size: 2714989 bytes Desc: not available URL: From dholler at redhat.com Fri Feb 23 21:30:46 2018 From: dholler at redhat.com (Dominik Holler) Date: Fri, 23 Feb 2018 22:30:46 +0100 Subject: [ovirt-users] ovn related events every 5 minutes on 4.2.1 In-Reply-To: References: Message-ID: <20180223223046.1d43eadf@t460p.fritz.box> On Fri, 23 Feb 2018 14:57:05 +0100 Gianluca Cecchi wrote: > Hello, > in my events pane of 4.2.1 I see, every 5 minutes, this event > > Networks of Provider ovirt-provider-ovn were successfully > synchronized. > > that fills so my table and prevent easy reading of other ones... > Can I disable or "relax" this? > Hello Gianluca, please find [1] if you want to disable this automatic background synchronization. 
If you want to change the interval to 2h, you can do it like this:

engine-config -s ExternalNetworkProviderSynchronizationRate=7200

Dominik

[1] https://gist.github.com/dominikholler/ed372e368d734a00cfc71e19b6ef5463

From mahdi.adnan at outlook.com Fri Feb 23 19:41:30 2018
From: mahdi.adnan at outlook.com (Mahdi Adnan)
Date: Fri, 23 Feb 2018 19:41:30 +0000
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start
In-Reply-To: References: Message-ID:

All VMs are using the same storage domain?

--

Respectfully,
Mahdi A. Mahdi

________________________________
From: users-bounces at ovirt.org on behalf of Arsène Gschwind
Sent: Friday, February 23, 2018 10:14 PM
To: users
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start

Hi,

After upgrading cluster compatibility to 4.2 some VMs won't start and I'm
unable to figure out why; it throws a Java exception.

I've attached the engine log.

Thanks for any help/hint.

rgds,
Arsene

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From aristos at aristos.net Sat Feb 24 06:10:00 2018
From: aristos at aristos.net (Aristos Vasiliou)
Date: Sat, 24 Feb 2018 08:10:00 +0200
Subject: [ovirt-users] problem adding new host to ovirt 4.2
Message-ID: <032f01d3ad36$1cb3eff0$561bcfd0$@aristos.net>

Hi,

I've set up a couple of machines to test out oVirt:

1. a CentOS 7 machine running oVirt 4.2 (kvm-manager)
2. a CentOS 7 machine running libvirt (kvm-server)

Using the oVirt web interface, I am trying to add a host (machine number 2).
I define the IP, user, and pass, click OK, and again OK, confirming I don't
want to use power management.
The status of the new host is now "Installing", and after a few seconds it
becomes "Install failed".

I have a couple of error messages in the Events tab:

- An error has occurred during installation of Host kvm-server: Failed to
execute stage 'Setup validation': Cannot locate ovirt-host package,
possible cause is incorrect channels.
- Host kvm-server installation failed. Command returned failure code 1
during SSH session 'root at kvm-server.home.local'.

What am I doing wrong here?

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From arsene.gschwind at unibas.ch Sat Feb 24 07:10:32 2018
From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=)
Date: Sat, 24 Feb 2018 08:10:32 +0100
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start
In-Reply-To: References: Message-ID:

Yes, all are using the same SD.

rgds,
Arsene

On 02/23/2018 08:41 PM, Mahdi Adnan wrote:
> All VMs are using the same storage domain ?
>
>
> --
>
> Respectfully,
> Mahdi A. Mahdi
>
> ------------------------------------------------------------------------
> *From:* users-bounces at ovirt.org on behalf of
> Arsène Gschwind
> *Sent:* Friday, February 23, 2018 10:14 PM
> *To:* users
> *Subject:* [ovirt-users] After upgrade to 4.2 some VM won't start
>
> Hi,
>
> After upgrading cluster compatibility to 4.2 some VMs won't start and
> I'm unable to figure out why; it throws a Java exception.
>
> I've attached the engine log.
>
> Thanks for any help/hint.
>
> rgds,
> Arsene
>
> --
>
> Arsène Gschwind
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 | CH-4056 Basel | Switzerland
> Tel. +41 79 449 25 63 | http://its.unibas.ch
> ITS-ServiceDesk: support-its at unibas.ch
> | +41 61 267 14 11
>

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63
| http://its.unibas.ch ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From jvdwege at xs4all.nl Sat Feb 24 07:32:15 2018 From: jvdwege at xs4all.nl (Joop van de Wege) Date: Sat, 24 Feb 2018 08:32:15 +0100 Subject: [ovirt-users] problem adding new host to ovirt 4.2 In-Reply-To: <032f01d3ad36$1cb3eff0$561bcfd0$@aristos.net> References: <032f01d3ad36$1cb3eff0$561bcfd0$@aristos.net> Message-ID: <1A44B6F9-9B7C-4B00-82B6-DA7FED04D6D0@xs4all.nl> On February 24, 2018 7:10:00 AM GMT+01:00, Aristos Vasiliou wrote: >Hi, > > > >I've set up a couple of machines to test out ovirt. > > > >1. centos 7 machine running ovirt 4.2 (kvm-manager) > >2. centos 7 machine running libvirt (kvm-server) > > > >Using the ovirt web interface, I am trying to add a host (machine >number 2). >I define the IP, user, pass, click OK, and again OK, confirming I don't >want >to use power management. The status of the new host is now "Installing" >and >after a few seconds becomes "Install failed" > > > >I have a couple of error messages in the Events tab: > > > >- An error has occurred during installation of Host >kvm-server: >Failed to execute stage 'Setup validation': Cannot locate ovirt-host >package, possible cause is incorrect channels. > >- Host kvm-server installation failed. Command returned >failure >code 1 during SSH session 'root at kvm-server.home.local'. > Looks like it's missing the ovirt repo on the kvm server. Regards, Jooo From arsene.gschwind at unibas.ch Sat Feb 24 08:03:55 2018 From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=) Date: Sat, 24 Feb 2018 09:03:55 +0100 Subject: [ovirt-users] After upgrade to 4.2 some VM won't start In-Reply-To: References: Message-ID: When creating an identical VM and attaching the one disk it will start and run perfectly. 
It seems that during the Cluster Compatibility Update something doesn't work right on running VM, this only happens on running VMs and I could reproduce it. Is there a way to do some kind of diff between the new and the old VM settings to find out what may be different? Thanks, Arsene On 02/23/2018 08:14 PM, Ars?ne Gschwind wrote: > > Hi, > > After upgrading cluster compatibility to 4.2 some VM won't start and > I'm unable to figured out why, it throws a java exception. > > I've attached the engine log. > > Thanks for any help/hint. > > rgds, > Arsene > > -- > > *Ars?ne Gschwind* > Fa. Sapify AG im Auftrag der Universit?t Basel > IT Services > Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland > Tel. +41 79 449 25 63? | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- *Ars?ne Gschwind* Fa. Sapify AG im Auftrag der Universit?t Basel IT Services Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland Tel. +41 79 449 25 63? | http://its.unibas.ch ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alessandro.DeSalvo at roma1.infn.it Sat Feb 24 13:32:10 2018 From: Alessandro.DeSalvo at roma1.infn.it (Alessandro De Salvo) Date: Sat, 24 Feb 2018 14:32:10 +0100 Subject: [ovirt-users] Hosted Engine VM not imported Message-ID: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> Hi, I have just migrated my dev cluster to the latest master, reinstalling the engine VM and reimporting from a previous backup. 
I'm trying with 4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos

I had a few problems:

- the documentation seems to be outdated, and I only found by searching the archives that it's needed to add the two (undocumented) options --he-remove-storage-vm --he-remove-hosts
- despite the fact I selected "No" to running the engine-setup command in the VM (the oVirt appliance), engine-setup is executed when running hosted-engine --deploy, and as a result the procedure does not stop to allow reloading the db backup. The only way I found was to put the hosted engine in global maintenance mode, stop the ovirt-engine, do an engine-cleanup and reload the db; then it's possible to add the first host in the GUI, but it must be done manually
- after it's all done, I can see the hosted_storage is imported, but the HostedEngine is not imported, and in the Events I see messages like this:

VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',)

The path here is clearly wrong, it should be /rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, and I see the hosted_engine.conf in the shared storage has it correctly set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0.

Any hint on what is not allowing the HostedEngine to be imported? I didn't find a way to add other hosted engine nodes if the HE VM is not imported in the cluster, like we used to do in the past with the CLI using hosted-engine --deploy on multiple hosts.

Thanks for any help,

Alessandro

From mahdi.adnan at outlook.com Fri Feb 23 17:15:55 2018
From: mahdi.adnan at outlook.com (Mahdi Adnan)
Date: Fri, 23 Feb 2018 17:15:55 +0000
Subject: [ovirt-users] rebooting hypervisors from time to time
In-Reply-To: <25891cac-5d5a-8dac-421d-e8e19277d3de@recogizer.de>
References: , <25891cac-5d5a-8dac-421d-e8e19277d3de@recogizer.de>
Message-ID:

Hi,

The log doesn't indicate an HV reboot, and I see lots of errors in the logs. During the reboot, what happened to the VMs inside of the HV? Migrated? Paused? What about the system's logs? Do they indicate a graceful shutdown?

--
Respectfully,
Mahdi A. Mahdi

________________________________
From: Erekle Magradze
Sent: Friday, February 23, 2018 2:48 PM
To: Mahdi Adnan; users at ovirt.org
Subject: Re: [ovirt-users] rebooting hypervisors from time to time

Thanks for the reply, I've attached all the logs from yesterday. The reboot happened during the day, but this is not the first time, and this is not the only hypervisor.

Kind Regards
Erekle

On 02/23/2018 09:00 AM, Mahdi Adnan wrote:
Hi,
Can you post the VDSM and Engine logs?

--
Respectfully,
Mahdi A. Mahdi

________________________________
From: users-bounces at ovirt.org on behalf of Erekle Magradze
Sent: Thursday, February 22, 2018 11:48 PM
To: users at ovirt.org
Subject: Re: [ovirt-users] rebooting hypervisors from time to time

Dear all,
It would be great if someone could share any experience regarding a similar case; it would be great to have a hint where to start the investigation.
Thanks again
Cheers
Erekle

On 02/22/2018 05:05 PM, Erekle Magradze wrote:
> Hello there,
>
> I am facing the following problem: from time to time one of the
> hypervisors (there are 3 of them) is rebooting. I am using
> ovirt-release42-4.2.1-1.el7.centos.noarch and gluster as a storage
> backend (glusterfs-3.12.5-2.el7.x86_64).
>
> I am suspecting gluster because of, e.g., the message below from one of
> the volumes.
>
> Could you please help and suggest in which direction the
> investigation should go?
>
> Thanks in advance
>
> Cheers
>
> Erekle
>
>
> [2018-02-22 15:36:10.011687] and [2018-02-22 15:37:10.955013]
> [2018-02-22 15:41:10.198701] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000).
> Holes=1 overlaps=0
> [2018-02-22 15:41:10.198704] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000).
> Holes=1 overlaps=0
> [2018-02-22 15:42:11.293608] I [MSGID: 109063]
> [dht-layout.c:716:dht_layout_normalize] 0-virtimages-dht: Found
> anomalies in (null) (gfid = 00000000-0000-0000-0000-000000000000).
> Holes=1 overlaps=0
> [2018-02-22 15:53:16.245720] I [MSGID: 100030]
> [glusterfsd.c:2524:main] 0-/usr/sbin/glusterfs: Started running
> /usr/sbin/glusterfs version 3.12.5 (args: /usr/sbin/glusterfs
> --volfile-server=10.0.0.21 --volfile-server=10.0.0.22
> --volfile-server=10.0.0.23 --volfile-id=/virtimages
> /rhev/data-center/mnt/glusterSD/10.0.0.21:_virtimages)
> [2018-02-22 15:53:16.263712] W [MSGID: 101002]
> [options.c:995:xl_opt_validate] 0-glusterfs: option 'address-family'
> is deprecated, preferred is 'transport.address-family', continuing
> with correction
> [2018-02-22 15:53:16.269595] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 1
> [2018-02-22 15:53:16.273483] I [MSGID: 101190]
> [event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started
> thread with index 2
> [2018-02-22 15:53:16.273594] W [MSGID: 101174]
> [graph.c:363:_log_if_unknown_option] 0-virtimages-readdir-ahead:
> option 'parallel-readdir' is not recognized
> [2018-02-22 15:53:16.273703] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-0: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276455] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-1: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.276683] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
> 0-virtimages-client-0: changing port to 49152 (from 0)
> [2018-02-22 15:53:16.279191] I [MSGID: 114020] [client.c:2360:notify]
> 0-virtimages-client-2: parent translators are ready, attempting
> connect on transport
> [2018-02-22 15:53:16.282126] I [MSGID: 114057]
> [client-handshake.c:1478:select_server_supported_programs]
> 0-virtimages-client-0: Using Program GlusterFS 3.3, Num (1298437),
> Version (330)
> [2018-02-22 15:53:16.282573] I [MSGID: 114046]
> [client-handshake.c:1231:client_setvolume_cbk] 0-virtimages-client-0:
> Connected to virtimages-client-0, attached to remote volume
> '/mnt/virtimages/virtimgs'.
> [2018-02-22 15:53:16.282584] I [MSGID: 114047]
> [client-handshake.c:1242:client_setvolume_cbk] 0-virtimages-client-0:
> Server and Client lk-version numbers are not same, reopening the fds
> [2018-02-22 15:53:16.282665] I [MSGID: 108005]
> [afr-common.c:4929:__afr_handle_child_up_event]
> 0-virtimages-replicate-0: Subvolume 'virtimages-client-0' came back
> up; going online.
> [2018-02-22 15:53:16.282877] I [rpc-clnt.c:1986:rpc_clnt_reconfig]
> 0-virtimages-client-1: changing port to 49152 (from 0)
> [2018-02-22 15:53:16.282934] I [MSGID: 114035]
> [client-handshake.c:202:client_set_lk_version_cbk]
> 0-virtimages-client-0: Server lk version = 1
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

--
Recogizer Group GmbH
Dr.rer.nat.
Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555
E-Mail erekle.magradze at recogizer.de
recogizer.com

-----------------------------------------------------------------
Recogizer Group GmbH
Managing directors: Oliver Habisch, Carsten Kreutze
Commercial register: Amtsgericht Bonn HRB 20724
Registered office: Bonn; VAT ID no.: DE294195993

This e-mail contains confidential and/or legally protected information. If you are not the intended recipient or have received this e-mail in error, please notify the sender immediately and delete this e-mail. The unauthorized copying of this e-mail and the unauthorized disclosure of the information it contains are not permitted.

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Recogizer Group GmbH
Dr.rer.nat. Erekle Magradze
Lead Big Data Engineering & DevOps
Rheinwerkallee 2, 53227 Bonn
Tel: +49 228 29974555
E-Mail erekle.magradze at recogizer.de
recogizer.com

-----------------------------------------------------------------
Recogizer Group GmbH
Managing directors: Oliver Habisch, Carsten Kreutze
Commercial register: Amtsgericht Bonn HRB 20724
Registered office: Bonn; VAT ID no.: DE294195993

This e-mail contains confidential and/or legally protected information. If you are not the intended recipient or have received this e-mail in error, please notify the sender immediately and delete this e-mail. The unauthorized copying of this e-mail and the unauthorized disclosure of the information it contains are not permitted.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From arsene.gschwind at unibas.ch Sat Feb 24 14:01:50 2018
From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=)
Date: Sat, 24 Feb 2018 15:01:50 +0100
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start
In-Reply-To: References: Message-ID: <78f64423-70b6-c161-d606-9318dfbccf65@unibas.ch>

Yes, exactly.

On 02/24/2018 11:58 AM, Mahdi Adnan wrote:
> So if you create a new VM and attach the same disk to it, it will run
> without issues?
>
> --
> Respectfully,
> Mahdi A. Mahdi
>
> ------------------------------------------------------------------------
> From: users-bounces at ovirt.org on behalf of Arsène Gschwind
> Sent: Saturday, February 24, 2018 11:03 AM
> To: users at ovirt.org
> Subject: Re: [ovirt-users] After upgrade to 4.2 some VM won't start
>
> When creating an identical VM and attaching the one disk it will start
> and run perfectly. It seems that during the cluster compatibility
> update something doesn't work right on running VMs; this only happens
> on running VMs and I could reproduce it.
>
> Is there a way to do some kind of diff between the new and the old VM
> settings to find out what may be different?
>
> Thanks,
> Arsene
>
> On 02/23/2018 08:14 PM, Arsène Gschwind wrote:
>>
>> Hi,
>>
>> After upgrading cluster compatibility to 4.2 some VMs won't start and
>> I'm unable to figure out why; it throws a Java exception.
>>
>> I've attached the engine log.
>>
>> Thanks for any help/hint.
>>
>> rgds,
>> Arsene
>>
>> --
>> Arsène Gschwind
>> Fa. Sapify AG im Auftrag der Universität Basel
>> IT Services
>> Klingelbergstr. 70 | CH-4056 Basel | Switzerland
>> Tel. +41 79 449 25 63 | http://its.unibas.ch
>> ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11
>>
>> _______________________________________________
>> Users mailing list
>> Users at ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/users
>
> --
> Arsène Gschwind
> Fa. Sapify AG im Auftrag der Universität Basel
> IT Services
> Klingelbergstr. 70 | CH-4056 Basel | Switzerland
> Tel. +41 79 449 25 63 | http://its.unibas.ch
> ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From geoffrsweet+ovirtusers at gmail.com Sun Feb 25 02:13:02 2018
From: geoffrsweet+ovirtusers at gmail.com (Geoff Sweet)
Date: Sat, 24 Feb 2018 18:13:02 -0800
Subject: [ovirt-users] API endpoint for a VM to fetch metadata about itself
Message-ID:

Is there an API endpoint that VMs can query to discover their oVirt metadata? Something similar to AWS's http://169.254.169.254/latest/meta-data/ query in EC2? I'm trying to stitch a lot of automation workflow together and so far I have had great luck with oVirt. But the next small hurdle is to figure out how all the post-install setup stuff can figure out who the VM is so it can apply the appropriate configurations.

Thanks!
-Geoff

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mahdi.adnan at outlook.com Sat Feb 24 10:58:32 2018
From: mahdi.adnan at outlook.com (Mahdi Adnan)
Date: Sat, 24 Feb 2018 10:58:32 +0000
Subject: [ovirt-users] After upgrade to 4.2 some VM won't start
In-Reply-To: References: , Message-ID:

So if you create a new VM and attach the same disk to it, it will run without issues?

--
Respectfully,
Mahdi A. Mahdi

________________________________
From: users-bounces at ovirt.org on behalf of Arsène Gschwind
Sent: Saturday, February 24, 2018 11:03 AM
To: users at ovirt.org
Subject: Re: [ovirt-users] After upgrade to 4.2 some VM won't start

When creating an identical VM and attaching the one disk it will start and run perfectly. It seems that during the cluster compatibility update something doesn't work right on running VMs; this only happens on running VMs and I could reproduce it.

Is there a way to do some kind of diff between the new and the old VM settings to find out what may be different?

Thanks,
Arsene

On 02/23/2018 08:14 PM, Arsène Gschwind wrote:

Hi,

After upgrading cluster compatibility to 4.2 some VMs won't start and I'm unable to figure out why; it throws a Java exception.

I've attached the engine log.

Thanks for any help/hint.

rgds,
Arsene

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11

_______________________________________________
Users mailing list
Users at ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

--
Arsène Gschwind
Fa. Sapify AG im Auftrag der Universität Basel
IT Services
Klingelbergstr. 70 | CH-4056 Basel | Switzerland
Tel. +41 79 449 25 63 | http://its.unibas.ch
ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From plord at intricatenetworks.com Sun Feb 25 06:11:08 2018
From: plord at intricatenetworks.com (Zip)
Date: Sun, 25 Feb 2018 00:11:08 -0600
Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs?
In-Reply-To: <2485548.xbxGyNV15t@awels>
References: <2485548.xbxGyNV15t@awels>
Message-ID:

Hi Alexander,

If I try the following: I get the error in my browser console:

Sun Feb 25 00:03:56 GMT-600 2018 org.ovirt.engine.ui.webadmin.plugin.PluginManager SEVERE: Exception caught while invoking event handler function [UiInit] for plugin [HelloWorld]: Error: java.lang.IndexOutOfBoundsException webadmin:1:13517
Sun Feb 25 00:03:56 GMT-600 2018 org.ovirt.engine.ui.webadmin.plugin.PluginManager WARNING: Plugin [HelloWorld] removed from service due to failure

However, if I remove the line:

api.addMainTab('FooTab','xtab123','http://foo.com/');

And replace it with something simple like:

alert('Test 123');

There are no errors and the alert fires as it should. Any ideas of what I might be missing?

I am running oVirt 4.2.1 on CentOS, Hosted Engine setup with 1 host for testing.

Thanks

Zip

> From: Alexander Wels
> Date: Monday, February 19, 2018 at 7:54 AM
> To: "users at ovirt.org"
> Cc: Preston
> Subject: Re: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs?
>
> On Friday, February 16, 2018 6:31:10 PM EST Zip wrote:
>> Are there any updated docs for the WebUI Plugins API?
>>
>
> Unfortunately no, I haven't had a chance to create updated documentation.
> However the first two links are mostly still accurate as we haven't done any
> major changes to the API.
>
> Some things to note that are different from the API documentation in
> https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ for 4.2:
>
> - alignRight no longer has any effect, as the UI in 4.2 no longer respects it.
> - none of the systemTreeNode selection code does anything (since there is no
> more system tree)
> - As noted in the documentation itself the RestApiSessionAcquired is no longer
> available as we have a proper SSO mechanism that you can utilize at this
> point.
> - Main Tabs are now called Main Views (but the api still calls them main tabs,
> so use the apis described). And sub tabs are now called detail tabs, but the
> same applies: the API hasn't changed the naming convention, so use subTabs.
> - mainTabActionButton location property no longer has any meaning and is
> ignored.
>
> That is it I think, we tried to make it so existing plugins would remain
> working even if some options no longer mean anything.
>
>> I have found the following which all appear to be old and no longer working?
>>
>> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interface_Plugins/
>> https://www.ovirt.org/develop/release-management/features/ux/uiplugins/
>> http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_Sunnyvale_2013.pdf
>>
>> Thanks
>>
>> Zip

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scotth at sourcemirrors.org Sun Feb 25 06:53:02 2018
From: scotth at sourcemirrors.org (Scott Harvanek)
Date: Sun, 25 Feb 2018 00:53:02 -0600
Subject: [ovirt-users] Power Management - Supermicro SuperBlade
Message-ID:

Hoping someone can help here, I've looked and can't find any examples on this.

I've got some SuperBlade chassis and the blades are managed via the chassis controller. What is the proper way, then, to configure power management via the controller? You can control individual blades via the SMCIPMItool, but I'm not entirely sure how to configure that inside of oVirt for power management. Does anyone have any experience with this, or can point me to some good docs?

Cheers!
Scott H.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From poltsi at poltsi.fi Sun Feb 25 07:55:56 2018
From: poltsi at poltsi.fi (=?UTF-8?Q?Paul-Erik_T=C3=B6rr=C3=B6nen?=)
Date: Sun, 25 Feb 2018 09:55:56 +0200
Subject: [ovirt-users] Confused by logical networks
In-Reply-To: References: <0a60e95142e6577b4e210ed814d495c7@webmail.poltsi.fi>
Message-ID:

On 2018-02-23 08:41, Yaniv Kaul wrote:
> There's no reason really to assign IPs to hosts on the logical network.

Ah yes, you're correct. I was using the physical host as a GW so that the VMs on the logical network would have an access point to the outside (like the CentOS repositories).

Anyways, I found the cause of my confusion in the end. The switch was not properly set up (*ahemm*, forgot to save the configuration). Three thumbs up for oVirt showing the LLDP info of the switch :-)

And despite having to reinstall essentially everything oVirt from scratch (due to a failed upgrade), the VMs were just a matter of import. Splendid work!

Poltsi

From sleviim at redhat.com Sun Feb 25 12:44:48 2018
From: sleviim at redhat.com (Shani Leviim)
Date: Sun, 25 Feb 2018 14:44:48 +0200
Subject: [ovirt-users] restore snapshot cannot restore memory
In-Reply-To: <4c04494d.7cd2.161c3699e9b.Coremail.yxpengi386@163.com>
References: <4c04494d.7cd2.161c3699e9b.Coremail.yxpengi386@163.com>
Message-ID:

Hi,
Can you please attach full engine and vdsm logs?

Regards,
Shani Leviim

On Fri, Feb 23, 2018 at 6:05 PM, pengyixiang wrote:
> hello
> I found that if we restore a snapshot, memory cannot be restored. I tested it
> with ovirt-4.1.2, vdsm-4.17.0 and libvirt-3.0.0,
> and I get some errors in [1]. It seems the vm is not paused when creating the
> snapshot, but self._underlyingCont() is called during vm start, so the error
> occurs; the vm is then started in libvirt but shut down in vdsm. With the
> changes in [2] it works well.
>
>
> [1]
> 2018-02-12 19:39:23,830+0800 ERROR (vm/d7be0fde) [virt.vm]
> (vmId='d7be0fde-f9b9-4447-a250-2453482faef9') The vm start process failed
> (vm:662)
> Traceback (most recent call last):
> File "/usr/share/vdsm/virt/vm.py", line 607, in _startUnderlyingVm
> self._completeIncomingMigration()
> File "/usr/share/vdsm/virt/vm.py", line 3268, in _completeIncomingMigration
> self.cont()
> File "/usr/share/vdsm/virt/vm.py", line 1128, in cont
> self._underlyingCont()
> File "/usr/share/vdsm/virt/vm.py", line 3368, in _underlyingCont
> self._dom.resume()
> File "/usr/lib/python2.7/dist-packages/vdsm/virt/virdomain.py", line 69, in f
> ret = attr(*args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/vdsm/libvirtconnection.py", line 123, in wrapper
> ret = f(*args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/vdsm/utils.py", line 926, in wrapper
> return func(inst, *args, **kwargs)
> File "/usr/lib/python2.7/dist-packages/libvirt.py", line 1905, in resume
> if ret == -1: raise libvirtError ('virDomainResume() failed', dom=self)
> libvirtError: Requested operation is not valid: domain is already running
>
> [2]
> --- a/Linx_Node/node_iso/install_script/py/vdsm/vdsm/virt/vm.py
> +++ b/Linx_Node/node_iso/install_script/py/vdsm/vdsm/virt/vm.py
> @@ -3677,6 +3677,8 @@ class Vm(object):
>          else:
>              snapFlags |= libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY
>
> +        self._underlyingPause()
> +
>          # When creating memory snapshot libvirt will pause the vm
>          should_freeze = not (memoryParams or frozen)
>
> @@ -3734,6 +3736,8 @@ class Vm(object):
>          if memoryParams:
>              self.cif.teardownVolumePath(memoryVol)
>
> +        self._underlyingCont()
> +
>          # Returning quiesce to notify the manager whether the guest agent
>          # froze and flushed the filesystems or not.
>          quiesce = should_freeze and freezed["status"]["code"] == 0
>
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From gianluca.cecchi at gmail.com Sun Feb 25 13:25:03 2018
From: gianluca.cecchi at gmail.com (Gianluca Cecchi)
Date: Sun, 25 Feb 2018 14:25:03 +0100
Subject: [ovirt-users] ovn related events every 5 minutes on 4.2.1
In-Reply-To: <20180223223046.1d43eadf@t460p.fritz.box>
References: <20180223223046.1d43eadf@t460p.fritz.box>
Message-ID:

On Fri, Feb 23, 2018 at 10:30 PM, Dominik Holler wrote:
> On Fri, 23 Feb 2018 14:57:05 +0100
> Gianluca Cecchi wrote:
>
> > Hello,
> > in my events pane of 4.2.1 I see, every 5 minutes, this event
> >
> > Networks of Provider ovirt-provider-ovn were successfully
> > synchronized.
> >
> > which fills up my table and prevents easy reading of the other ones...
> > Can I disable or "relax" this?
> >
>
> Hello Gianluca,
> please find [1] if you want to disable this automatic background
> synchronization.
> If you want to change the interval to 2h, you can do it like this:
> engine-config -s ExternalNetworkProviderSynchronizationRate=7200
> Dominik
>
> [1]
> https://gist.github.com/dominikholler/ed372e368d734a00cfc71e19b6ef5463

Ok, thanks. I will try. But, apart from the feedback, I don't understand the need for this kind of background synchronization either...

Gianluca

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From sleviim at redhat.com Sun Feb 25 13:26:41 2018
From: sleviim at redhat.com (Shani Leviim)
Date: Sun, 25 Feb 2018 15:26:41 +0200
Subject: [ovirt-users] Ghost Snapshot Disk
In-Reply-To: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr>
References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr>
Message-ID:

Hi Lionel,

You can try to delete that snapshot directly from the database.

In case of using psql [1], once you've logged in to your database, you can run this query:
$ select * from snapshots where vm_id = '';
This one would list the snapshots associated with a VM by its id.

In case you don't have your vm_id, you can locate it by querying:
$ select * from vms where vm_name = 'nil';
This one would show you some details about a VM by its name (including the vm's id).

Once you've found the relevant snapshot, you can delete it by running:
$ delete from snapshots where snapshot_id = '';
This one would delete the desired snapshot from the database.

Since it's a delete operation, I would suggest confirming the ids before executing it.

Hope you've found it useful!

[1] https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_PostgreSQL_Database_for_Use_with_the_oVirt_Engine/

Regards,
Shani Leviim

On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote:
> Hi,
>
> I have a problem with a snapshot. On one VM there is a ghost "snapshot" without
> name or uuid; the only information is its size (see attachment). In the snapshot
> tab there is no trace of this disk.
>
> In database (table images) I found this:
> f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | 2748779069440 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | 1 | 2
> 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | 1 | 2
> 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | 5368709120000 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969
>
> But I do not know which line is my disk. Is it possible to delete it directly
> in the database? Or is it better to dump my disk to a new one and delete the
> "corrupted" one?
>
> Another thing: when I try to move the disk to another storage domain I always
> get "uncaught exception occurred ..." and no error in engine.log.
>
> Thank you for helping.
>
> --
> Lionel Caignec
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ylavi at redhat.com Sun Feb 25 14:00:33 2018
From: ylavi at redhat.com (Yaniv Lavi)
Date: Sun, 25 Feb 2018 16:00:33 +0200
Subject: [ovirt-users] oVirt Survey 2018 results
In-Reply-To: References: Message-ID:

Some notes that may help the users based on the replies on the survey (based on latest 4.2):

- ISO upload to a storage domain is supported (UI/API). It can be used just like disk upload, and it supports quota based on the disk used.
- You now have partial incremental backup support with the download/upload API (as long as the snapshot still exists): https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots/
- You can now download/upload VMs from data storage domains.
- VMs do not keep their compatibility version after a cluster version upgrade, so it is ok not to restart them during upgrade (at least until CentOS 8).
- Ceph can be used with an iSCSI target with oVirt.
- The macspoofing hook is not really needed in oVirt 4.x; this is a supported feature that can be used in vNIC profiles.
- Some of the users change migration settings in VDSM. This is not really needed with oVirt 4.2 and migration policies, which are less error prone and more advanced.

Hope this helps some of you.

Thanks!

YANIV LAVI
SENIOR TECHNICAL PRODUCT MANAGER
Red Hat Israel Ltd.
34 Jerusalem Road, Building A, 1st floor
Ra'anana, Israel 4350109
ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi
TRIED. TESTED. TRUSTED.
@redhatnews

On Fri, Feb 2, 2018 at 11:22 AM, Sandro Bonazzola wrote:
> Thank you very much for having participated in oVirt Survey 2018!
> Results are now publicly available at http://bit.ly/2Ez909d
> We're now analyzing results for 4.3 planning.
>
> --
> SANDRO BONAZZOLA
> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
> Red Hat EMEA
> TRIED. TESTED. TRUSTED.
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ylavi at redhat.com Sun Feb 25 14:13:07 2018
From: ylavi at redhat.com (Yaniv Lavi)
Date: Sun, 25 Feb 2018 16:13:07 +0200
Subject: [ovirt-users] oVirt Survey 2018 results
In-Reply-To: References: Message-ID:

YANIV LAVI
SENIOR TECHNICAL PRODUCT MANAGER
Red Hat Israel Ltd.
34 Jerusalem Road, Building A, 1st floor
Ra'anana, Israel 4350109
ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi
TRIED. TESTED. TRUSTED.
@redhatnews

On Sun, Feb 25, 2018 at 4:00 PM, Yaniv Lavi wrote:
> Some notes that may help the users based on the replies on the survey
> (based on latest 4.2):
>
> - ISO upload to a storage domain is supported (UI/API). It can be used just
> like disk upload, and it supports quota based on the disk used.
>
> - You now have partial incremental backup support with the download/upload API
> (as long as the snapshot still exists):
> https://ovirt.org/develop/release-management/features/storage/backup-restore-disk-snapshots/
>
> - You can now download/upload VMs from data storage domains.
>
> Correction:
> - VMs keep their compatibility version after a cluster version upgrade, so it
> is ok not to restart them during upgrade (at least until CentOS 8).
>
> - Ceph can be used with an iSCSI target with oVirt.
>
> - The macspoofing hook is not really needed in oVirt 4.x; this is a supported
> feature that can be used in vNIC profiles.
>
> - Some of the users change migration settings in VDSM. This is not really
> needed with oVirt 4.2 and migration policies, which are less error prone and
> more advanced.
>
> Hope this helps some of you.
>
> Thanks!
>
> YANIV LAVI
>
> SENIOR TECHNICAL PRODUCT MANAGER
>
> Red Hat Israel Ltd.
>
> 34 Jerusalem Road, Building A, 1st floor
>
> Ra'anana, Israel 4350109
>
> ylavi at redhat.com T: +972-9-7692306/8272306 F: +972-9-7692223 IM: ylavi
> TRIED. TESTED. TRUSTED.
>
> On Fri, Feb 2, 2018 at 11:22 AM, Sandro Bonazzola
> wrote:
>
>> Thank you very much for having participated in oVirt Survey 2018!
>> Results are now publicly available at http://bit.ly/2Ez909d
>> We're now analyzing results for 4.3 planning.
>>
>> --
>>
>> SANDRO BONAZZOLA
>>
>> ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D
>>
>> Red Hat EMEA
>>
>> TRIED. TESTED. TRUSTED.
>> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From omachace at redhat.com Mon Feb 26 07:05:24 2018 From: omachace at redhat.com (Ondra Machacek) Date: Mon, 26 Feb 2018 08:05:24 +0100 Subject: [ovirt-users] API endpoint for a VM to fetch metadata about itself In-Reply-To: References: Message-ID: <1e774133-bc56-544f-cf49-71620571a128@redhat.com> We don't have any such resource. That information is available in different places of the API. For example, to find the information about devices of the VM, like network device information (IP address, MAC, etc), you can query: /ovirt-engine/api/vms/{vm_id}/reporteddevices The FQDN is listed right in the basic information of the VM when querying the VM itself: /ovirt-engine/api/vms/{vm_id} You can find all the information about specific attributes returned by the API here in the documentation: http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/vm On 02/25/2018 03:13 AM, Geoff Sweet wrote: > Is there an API endpoint that VM's can query to discover it's oVirt > metadata? Something similar to AWS's > http://169.254.169.254/latest/meta-data/ > query in EC2? I'm trying to > stitch a lot of automation workflow together and so far I have had great > luck with oVirt. But the next small hurdle is to figure out how all the > post-install setup stuff can figure out who the VM is so it can the > appropriate configurations. > > Thanks!
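Ondra's answer can be turned into a small guest-side sketch: once a provisioning script has fetched /ovirt-engine/api/vms/{vm_id}/reporteddevices (with credentials it already holds), the standard library is enough to pull out the MAC/IP mapping. The sample payload below is hand-written to mimic the v4 API element names (reported_device, mac/address, ips/ip/address); treat the exact shape as an assumption and check the api-model documentation linked above.

```python
# Sketch: parse a /vms/{vm_id}/reporteddevices reply with the stdlib only.
# The XML below is a hand-written sample shaped like the v4 API output;
# the element names are assumptions, verify against the api-model docs.
import xml.etree.ElementTree as ET

SAMPLE = """\
<reported_devices>
  <reported_device>
    <name>eth0</name>
    <type>network</type>
    <mac><address>00:1a:4a:16:01:51</address></mac>
    <ips>
      <ip><address>192.168.0.10</address><version>v4</version></ip>
    </ips>
  </reported_device>
</reported_devices>"""

def nic_addresses(xml_text):
    """Return {device_name: (mac, [ip, ...])} from a reporteddevices reply."""
    result = {}
    for dev in ET.fromstring(xml_text).iter("reported_device"):
        name = dev.findtext("name")
        mac = dev.findtext("mac/address")
        ips = [ip.text for ip in dev.findall("ips/ip/address")]
        result[name] = (mac, ips)
    return result

print(nic_addresses(SAMPLE))
# {'eth0': ('00:1a:4a:16:01:51', ['192.168.0.10'])}
```

A guest script could match one of the returned MACs against its own interfaces to work out which VM record it is, which is roughly the "who am I" step Geoff is after.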
> -Geoff > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From caignec at cines.fr Mon Feb 26 07:20:00 2018 From: caignec at cines.fr (Lionel Caignec) Date: Mon, 26 Feb 2018 08:20:00 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> Message-ID: <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> Hi Shani, thank you for helping me with your reply. I made a small mistake in my explanation: in fact it is the snapshot that does not exist anymore. It is the disk(s) related to it which still exist, and perhaps the LVM volume. So can I delete this disk manually in the database? What about the LVM volume? Is it better to recreate the disk, sync the data and destroy the old one? ----- Original Message ----- From: "Shani Leviim" To: "Lionel Caignec" Cc: "users" Sent: Sunday, 25 February 2018 14:26:41 Subject: Re: [ovirt-users] Ghost Snapshot Disk Hi Lionel, You can try to delete that snapshot directly from the database. In case of using psql [1], once you've logged in to your database, you can run this query: $ select * from snapshots where vm_id = ''; This one would list the snapshots associated with a VM by its id. In case you don't have your vm_id, you can locate it by querying: $ select * from vms where vm_name = 'nil'; This one would show you some details about a VM by its name (including the vm's id). Once you've found the relevant snapshot, you can delete it by running: $ delete from snapshots where snapshot_id = ''; This one would delete the desired snapshot from the database. Since it's a delete operation, I would suggest confirming the ids before executing it. Hope you've found it useful!
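Shani's select-then-confirm-then-delete sequence can be rehearsed on a throwaway database before touching the real engine DB. The sketch below uses sqlite3 purely as a stand-in (the engine database is PostgreSQL) and a two-column mock of the snapshots table; all ids are invented.

```python
# Dry run of the select-then-delete flow on a throwaway database.
# sqlite3 is only a stand-in here (the engine DB is PostgreSQL), and this
# two-column schema is a made-up subset of the real "snapshots" table.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (snapshot_id TEXT, vm_id TEXT)")
db.executemany("INSERT INTO snapshots VALUES (?, ?)",
               [("snap-keep", "vm-1"), ("snap-ghost", "vm-1")])

vm_id = "vm-1"
# Step 1: list the snapshots of the VM and confirm the id to be removed.
rows = db.execute("SELECT snapshot_id FROM snapshots WHERE vm_id = ?",
                  (vm_id,)).fetchall()
print(rows)                      # both snapshot ids, confirm before deleting

# Step 2: delete only the confirmed ghost snapshot.
db.execute("DELETE FROM snapshots WHERE snapshot_id = ?", ("snap-ghost",))
remaining = [r[0] for r in db.execute("SELECT snapshot_id FROM snapshots")]
print(remaining)                 # ['snap-keep']
```

The point of the rehearsal is the ordering: select first, delete by the single confirmed snapshot_id second, never by a broad where-clause.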
[1] https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ *Regards,* *Shani Leviim* On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote: > Hi, > > i've a problem with snapshot. On one VM i've a "snapshot" ghost without > name or uuid, only information is size (see attachment). In the snapshot > tab there is no trace about this disk. > > In database (table images) i found this : > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | > 1 | 2 > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | > 1 | 2 > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > But i does not know which line is my disk. Is it possible to delete > directly into database? > Or is it better to dump my disk to another new and delete the "corrupted > one"? > > Another thing, when i try to move the disk to another storage domain i > always get "uncaght exeption occured ..." and no error in engine.log. > > > Thank you for helping. 
> > -- > Lionel Caignec > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From ykaul at redhat.com Mon Feb 26 07:42:57 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Mon, 26 Feb 2018 09:42:57 +0200 Subject: [ovirt-users] problem adding new host to ovirt 4.2 In-Reply-To: <1A44B6F9-9B7C-4B00-82B6-DA7FED04D6D0@xs4all.nl> References: <032f01d3ad36$1cb3eff0$561bcfd0$@aristos.net> <1A44B6F9-9B7C-4B00-82B6-DA7FED04D6D0@xs4all.nl> Message-ID: On Sat, Feb 24, 2018 at 9:32 AM, Joop van de Wege wrote: > On February 24, 2018 7:10:00 AM GMT+01:00, Aristos Vasiliou < > aristos at aristos.net> wrote: > >Hi, > > > > > > > >I've set up a couple of machines to test out ovirt. > > > > > > > >1. centos 7 machine running ovirt 4.2 (kvm-manager) > > > >2. centos 7 machine running libvirt (kvm-server) > > > > > > > >Using the ovirt web interface, I am trying to add a host (machine > >number 2). > >I define the IP, user, pass, click OK, and again OK, confirming I don't > >want > >to use power management. The status of the new host is now "Installing" > >and > >after a few seconds becomes "Install failed" > > > > > > > >I have a couple of error messages in the Events tab: > > > > > > > >- An error has occurred during installation of Host > >kvm-server: > >Failed to execute stage 'Setup validation': Cannot locate ovirt-host > >package, possible cause is incorrect channels. > > > >- Host kvm-server installation failed. Command returned > >failure > >code 1 during SSH session 'root at kvm-server.home.local'. > > > Looks like it's missing the ovirt repo on the kvm server. > Correct. I'm wondering if there's an easy way to check for the existence of a repo, thus providing a more concrete error message to the user. I think yum repolist might be useful, but I don't like its use of free text (instead of the URL or some canonical ID for a repo...) Y.
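Yaniv's idea of checking for the repo by a canonical ID or URL rather than matching the free text of yum repolist could look roughly like this: parse the .repo files themselves with configparser. The directory layout and the sample repo id/baseurl below are illustrative only; real oVirt repo files may differ.

```python
# Sketch of the check Yaniv describes: look for an enabled yum/dnf repo
# whose section id or baseurl mentions "ovirt", by parsing the .repo files
# directly instead of matching `yum repolist` free-text output.
# The demo directory and sample repo file below are illustrative.
import configparser
import glob
import os
import tempfile

def has_repo(repo_dir, needle="ovirt"):
    for path in glob.glob(os.path.join(repo_dir, "*.repo")):
        cfg = configparser.ConfigParser()
        cfg.read(path)
        for section in cfg.sections():
            enabled = cfg.get(section, "enabled", fallback="1").strip() == "1"
            baseurl = cfg.get(section, "baseurl", fallback="")
            if enabled and (needle in section.lower() or needle in baseurl.lower()):
                return True
    return False

# Demo against a temporary directory standing in for /etc/yum.repos.d.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "ovirt-4.2.repo"), "w") as f:
    f.write("[ovirt-4.2]\nname=oVirt 4.2\n"
            "baseurl=http://resources.ovirt.org/pub/ovirt-4.2/rpm/el7/\n"
            "enabled=1\n")
print(has_repo(demo))   # True
```

On a real host the first argument would be /etc/yum.repos.d; matching on the section id or baseurl avoids the locale- and format-dependent repolist output.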
> > Regards, > > Jooo > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolas at devels.es Mon Feb 26 08:58:36 2018 From: nicolas at devels.es (nicolas at devels.es) Date: Mon, 26 Feb 2018 08:58:36 +0000 Subject: [ovirt-users] VMs stuck in migrating state Message-ID: Hi, We're running 4.1.9 and during the weekend we had a storage issue that seemed to leave some hosts in a strange state. One of the hosts has almost all VMs migrating (although it does not actually seem to be migrating them) and the migration state cannot be cancelled. When clicking on one of those machines and selecting 'Cancel migration', in the ovirt-engine log I see: 2018-02-26 08:52:07,588Z INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] HostName = host2.domain.com 2018-02-26 08:52:07,588Z ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.CancelMigrateVDSCommand] (org.ovirt.thread.pool-6-thread-36) [887dfbf9-dece-4f7b-90a8-dac02b849b7f] Command 'CancelMigrateVDSCommand(HostName = host2.domain.com, CancelMigrationVDSParameters:{runAsync='true', hostId='e63b9146-10c4-47ad-bd6c-f053a8c5b4eb', vmId='26d37e43-32e2-4e55-9c1e-1438518d5021'})' execution failed: VDSGenericException: VDSErrorException: Failed to CancelMigrateVDS, error = Migration process cancelled, code = 82 On the vdsm side I see: 2018-02-26 08:56:19,396+0000 INFO (jsonrpc/0) [vdsm.api] START migrateCancel() from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:46) 2018-02-26 08:56:19,398+0000 INFO (jsonrpc/0) [vdsm.api] FINISH migrateCancel return={'status': {'message': 'Migration process cancelled', 'code': 82}, 'progress': 0} from=::ffff:10.X.X.X,54654, flow_id=874d36d7-63f5-4b71-8a4d-6d9f3ec65858 (api:52) So no error on the vdsm side
log. I already tried restarting ovirt-engine but it didn't work. Could someone shed some light on how to cancel the migration status for these machines? All of them seem to be running on the same host. Thanks. From mperina at redhat.com Mon Feb 26 09:34:29 2018 From: mperina at redhat.com (Martin Perina) Date: Mon, 26 Feb 2018 10:34:29 +0100 Subject: [ovirt-users] Power Management - Supermicro SuperBlade In-Reply-To: References: Message-ID: On Sun, Feb 25, 2018 at 7:53 AM, Scott Harvanek wrote: > Hoping someone can help here, I've looked and can't find any examples on > this. > > I've got some SuperBlade chassis and the blades are managed via the > chassis controller. What is the proper way to configure power management > then via the controller? You can control individual blades via the > SMCIPMItool but I'm not entirely sure how to configure that inside of Ovirt > for power management, does anyone have any experience on this or can point > me to some good docs? > ?According to [1] those servers should support IPMI, so you could try ipmilan fence agent and most probably try to add lanplus=1 into Options field of an agent. If it doesn't work as expected, could you please try to execute below commands and share the output? fence_ipmilan -a -l -p -P -vvv -o status Thanks Martin [1] https://www.supermicro.com/products/SuperBlade/management/? > Cheers! > > Scott H. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mperina at redhat.com Mon Feb 26 09:38:22 2018 From: mperina at redhat.com (Martin Perina) Date: Mon, 26 Feb 2018 10:38:22 +0100 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 11:34 AM, Terry hey wrote: > Dear Martin, > I am very sorry that i reply you so late. > Do you mean that 4.2 can support ilo5 by selecting the option "ilo4" in > power management? > Yes > "from the error message below I'd say that you are either not using > correct IP address of iLO5 interface or you haven't enabled remote access > to your iLO5 interface" > I just try it and double confirm that i did not type a wrong IP. But the > error message is same. > Unfortunately I don't have an iLO5 server available, so I cannot provide more details. Anyway could you please double check in your server documentation that you have enabled access to the iLO5 IPMI interface correctly? And could you please share the output of the following command? fence_ilo4 -a -l -p -v -o status Thanks Martin > > Regards > Terry > > 2018-02-08 16:13 GMT+08:00 Martin Perina : >> Hi Terry, >> >> from the error message below I'd say that you are either not using >> correct IP address of iLO5 interface or you haven't enabled remote access >> to your iLO5 interface. >> According to [1] iLO5 should fully IPMI compatible. So are you sure that >> you enabled the remote access to your iLO5 address in iLO5 management? >> Please consult [1] how to enable everything and use a user with at least >> Operator privileges. >> >> Regards >> >> Martin >> >> [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018324en_us >> >> >> On Thu, Feb 8, 2018 at 7:57 AM, Terry hey wrote: >>> Dear Martin, >>> >>> Thank you for helping me. To answer your question, >>> 1. Does the Test in Edit fence agent dialog work?? >>> Ans: it shows that "Test failed: Internal JSON-RPC error" >>> >>> Regardless the fail result, i press "OK" to enable power management.
>>> There are four event log appear in "Events" >>> ********************************The follwing are the log in >>> "Event""******************************** >>> Host host01 configuration was updated by admin at internal-authz. >>> Kdump integration is enabled for host hostv01, but kdump is not >>> configured properly on host. >>> Health check on Host host01 indicates that future attempts to Stop this >>> host using Power-Management are expected to fail. >>> Health check on Host host01 indicates that future attempts to Start this >>> host using Power-Management are expected to fail. >>> >>> 2. If not could you please try to install fence-agents-all package on >>> different host and execute? >>> Ans: It just shows "Connection timed out". >>> >>> So, does it means that it is not support iLo5 now or i configure wrongly? >>> >>> Regards, >>> Terry >>> >>> 2018-02-02 15:46 GMT+08:00 Martin Perina : >>> >>>> >>>> >>>> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey >>>> wrote: >>>> >>>>> Dear Martin, >>>>> >>>>> Um..Since i am going to use HPE ProLiant DL360 Gen10 Server to setup >>>>> oVirt Node(Hypervisor). HP G10 is using ilo5 rather than ilo4. Therefore, i >>>>> would like to ask whether oVirt power management support iLO5 or not. >>>>> >>>> >>>> ?We don't have any hardware with iLO5 available, but there is a good >>>> chance that it will be compatible with iLO4. Have you tried to setup your >>>> server with iLO4? Does the Test in Edit fence agent dialog work?? If not >>>> could you please try to install fence-agents-all package on different host >>>> and execute following: >>>> >>>> ?? >>>> f >>>> ?? >>>> ence_ilo4 -a -l -p -v -o status >>>> >>>> and share the output? >>>> >>>> Thanks >>>> >>>> Martin >>>> >>>> >>>>> If not, do you have any idea to setup power management with HP G10? 
>>>>> >>>>> Regards, >>>>> Terry >>>>> >>>>> 2018-02-01 16:21 GMT+08:00 Martin Perina : >>>>> >>>>>> >>>>>> >>>>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>>>>> lorenzetto.luca at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. >>>>>>> Try using the standard ipmi. >>>>>>> >>>>>> >>>>>> ?It's not just an alias, ilo3/ilo4 also have different defaults than >>>>>> ipmilan. For example if you use ilo4, then by default following is used: >>>>>> >>>>>> ? >>>>>> >>>>>> ?lanplus=1 >>>>>> power_wait=4 >>>>>> >>>>>> ?So I recommend to start with ilo4 and add any necessary custom >>>>>> options into Options field. If you need some custom >>>>>> options, could you please share them with us? It would be very >>>>>> helpful for us, if needed we could introduce ilo5 with >>>>>> different defaults then ilo4 >>>>>> >>>>>> Thanks >>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>>> Luca >>>>>>> >>>>>>> >>>>>>> >>>>>>> Il 31 gen 2018 11:14 PM, "Terry hey" ha >>>>>>> scritto: >>>>>>> >>>>>>>> Dear all, >>>>>>>> Did oVirt 4.2 Power management support iLO5 as i could not see iLO5 >>>>>>>> option in Power Management. >>>>>>>> >>>>>>>> Regards >>>>>>>> Terry >>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list >>>>>>>> Users at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> >>>>>>>> >>>>>>> _______________________________________________ >>>>>>> Users mailing list >>>>>>> Users at ovirt.org >>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Martin Perina >>>>>> Associate Manager, Software Engineering >>>>>> Red Hat Czech s.r.o. >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> Martin Perina >>>> Associate Manager, Software Engineering >>>> Red Hat Czech s.r.o. >>>> >>> >>> >> >> >> -- >> Martin Perina >> Associate Manager, Software Engineering >> Red Hat Czech s.r.o. 
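The manual check Martin asks for throughout this thread (run the fence agent directly with -a/-l/-p and -o status) is easy to get wrong with shell quoting, so a tiny helper that builds the argument vector can help. The host, user and password below are placeholders (the real values were stripped from the archived mail), and the -P/lanplus flag is shown as an option because, as noted above, ilo3/ilo4 default to the lanplus protocol.

```python
# Helper sketch for the diagnostic step suggested in this thread: build
# the fence-agent status call without shell quoting pitfalls.
# Host/user/password values here are placeholders.
import subprocess

def fence_status_cmd(agent, addr, login, password, lanplus=False):
    cmd = [agent, "-a", addr, "-l", login, "-p", password, "-v", "-o", "status"]
    if lanplus:          # ilo3/ilo4 defaults imply the lanplus protocol
        cmd.append("-P")
    return cmd

cmd = fence_status_cmd("fence_ilo4", "10.0.0.5", "admin", "secret", lanplus=True)
print(" ".join(cmd))
# To actually run it (on a host with fence-agents installed):
# subprocess.run(cmd, capture_output=True, text=True)
```

Passing the list form to subprocess.run avoids the shell entirely, so passwords with special characters survive intact.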
>> > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsoyer at systea.fr Mon Feb 26 10:23:13 2018 From: fsoyer at systea.fr (fsoyer) Date: Mon, 26 Feb 2018 11:23:13 +0100 Subject: [ovirt-users] VMs with multiple vdisks don't migrate In-Reply-To: <87y3jkrsmp.fsf@redhat.com> Message-ID: <3c75-5a93e000-d3-20359180@258179448> Hi, I don't believe that this is related to a host, tests have been done from victor source to ginger dest and ginger to victor. I don't see problems on storage (gluster 3.12 native managed by ovirt), when VMs with a single disk from 20 to 250G migrate without error in some seconds and with no downtime. How can I enable this libvirt debug mode ? -- Cordialement, Frank Soyer On Friday, February 23, 2018 09:56 CET, Milan Zamazal wrote: Maor Lipchuk writes: > I encountered a bug (see [1]) which contains the same error mentioned in > your VDSM logs (see [2]), but I doubt it is related. Indeed, it's not related. The error in vdsm_victor.log just means that the info gathering call tries to access the libvirt domain before the incoming migration is completed. It's ugly but harmless. > Milan, maybe you have any advice to troubleshoot the issue? Will the > libvirt/qemu logs can help? It seems there is something wrong on (at least) the source host. There are no migration progress messages in the vdsm_ginger.log and there are warnings about stale stat samples. That looks like problems with calling libvirt: slow and/or stuck calls, maybe due to storage problems. The possibly faulty second disk could cause that. libvirt debug logs could tell us whether that is indeed the problem and whether it is caused by storage or something else. > I would suggest to open a bug on that issue so we can track it more > properly.
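On Frank's question about enabling the libvirt debug mode Milan mentions: the usual way is to raise the logging filters in /etc/libvirt/libvirtd.conf on the involved hosts and restart libvirtd. The snippet below is a sketch; the exact filter list is an illustrative choice, and on a vdsm-managed host parts of this file are maintained by vdsm, so verify the settings survive a reconfigure.

```ini
# /etc/libvirt/libvirtd.conf -- illustrative debug-logging settings.
# Filter syntax is "<level>:<match>", where level 1 = debug.
log_filters="1:qemu 1:libvirt 1:conf 4:object 4:event 4:json 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
```

Then restart the daemon (systemctl restart libvirtd; running VMs are not affected by a daemon restart) and reproduce the migration; the debug output lands in /var/log/libvirt/libvirtd.log on each host.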
> > Regards, > Maor > > > [1] > https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to > VM running on 2 Hosts > > [2] > 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer] > Internal server error (__init__:577) > Traceback (most recent call last): > File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, > in _handle_request > res = method(**params) > File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in > _dynamicMethod > result = fn(*methodArgs) > File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies > io_tune_policies_dict = self._cif.getAllVmIoTunePolicies() > File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies > 'current_values': v.getIoTune()} > File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune > result = self.getIoTuneResponse() > File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse > res = self._dom.blockIoTune( > File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, > in __getattr__ > % self.vmid) > NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not > started yet or was shut down > > On Thu, Feb 22, 2018 at 4:22 PM, fsoyer wrote: > >> Hi, >> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger >> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5), >> while the engine.log in the first mail on 2018-02-12 was for VMs standing >> on victor, migrated (or failed to migrate...) to ginger. Symptoms were >> exactly the same, in both directions, and VMs works like a charm before, >> and even after (migration "killed" by a poweroff of VMs). >> Am I the only one experimenting this problem ? >> >> >> Thanks >> -- >> >> Cordialement, >> >> *Frank Soyer * >> >> >> >> Le Jeudi, F?vrier 22, 2018 00:45 CET, Maor Lipchuk >> a ?crit: >> >> >> Hi Frank, >> >> Sorry about the delay repond. 
>> I've been going through the logs you attached, although I could not find >> any specific indication why the migration failed because of the disk you >> were mentionning. >> Does this VM run with both disks on the target host without migration? >> >> Regards, >> Maor >> >> >> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote: >>> >>> Hi Maor, >>> sorry for the double post, I've change the email adress of my account and >>> supposed that I'd need to re-post it. >>> And thank you for your time. Here are the logs. I added a vdisk to an >>> existing VM : it no more migrates, needing to poweroff it after minutes. >>> Then simply deleting the second disk makes migrate it in exactly 9s without >>> problem ! >>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561 >>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d >>> >>> -- >>> >>> Cordialement, >>> >>> *Frank Soyer * >>> Le Mercredi, F?vrier 14, 2018 11:04 CET, Maor Lipchuk < >>> mlipchuk at redhat.com> a ?crit: >>> >>> >>> Hi Frank, >>> >>> I already replied on your last email. >>> Can you provide the VDSM logs from the time of the migration failure for >>> both hosts: >>> ginger.local.systea.f r and v >>> ictor.local.systea.fr >>> >>> Thanks, >>> Maor >>> >>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: >>>> >>>> Hi all, >>>> I discovered yesterday a problem when migrating VM with more than one >>>> vdisk. >>>> On our test servers (oVirt4.1, shared storage with Gluster), I created 2 >>>> VMs needed for a test, from a template with a 20G vdisk. On this VMs I >>>> added a 100G vdisk (for this tests I didn't want to waste time to extend >>>> the existing vdisks... But I lost time finally...). The VMs with the 2 >>>> vdisks works well. >>>> Now I saw some updates waiting on the host. I tried to put it in >>>> maintenance... But it stopped on the two VM. They were marked "migrating", >>>> but no more accessible. Other (small) VMs with only 1 vdisk was migrated >>>> without problem at the same time. 
>>>> I saw that a kvm process for the (big) VMs was launched on the source >>>> AND destination host, but after tens of minutes, the migration and the VMs >>>> was always freezed. I tried to cancel the migration for the VMs : failed. >>>> The only way to stop it was to poweroff the VMs : the kvm process died on >>>> the 2 hosts and the GUI alerted on a failed migration. >>>> In doubt, I tried to delete the second vdisk on one of this VMs : it >>>> migrates then without error ! And no access problem. >>>> I tried to extend the first vdisk of the second VM, the delete the >>>> second vdisk : it migrates now without problem ! >>>> >>>> So after another test with a VM with 2 vdisks, I can say that this >>>> blocked the migration process :( >>>> >>>> In engine.log, for a VMs with 1 vdisk migrating well, we see : >>>> >>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired >>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction >>>> group MIGRATE_VM with role type USER >>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 14f61ee0 >>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] START, >>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, >>>> MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', 
enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 775cd381 >>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, >>>> log id: 775cd381 >>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 >>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) >>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: >>>> VM_MIGRATION_START(62), Correlation ID: 2f712024-5982-46a8-82c8-fd8293da5725, >>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom >>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>> ginger.local.systea.fr, User: admin at internal-authz). 
>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>> START, FullListVDSCommand(HostName = victor.local.systea.fr, >>>> FullListVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 >>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, >>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, >>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, >>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', 
customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, >>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 54b4b435 >>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' >>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>> [54a65b66] 
Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>> [54a65b66] Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> was unexpectedly detected as 'MigratingTo' on VDS >>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) >>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >>>> ginger.local.systea.fr) ignoring it in the refresh until migration is >>>> done >>>> .... 
>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>> victor.local.systea.fr) >>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >>>> DestroyVDSCommand(HostName = victor.local.systea.fr, >>>> DestroyVmVDSCommandParameters:{runAsync='true', >>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >>>> id: 560eca57 >>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >>>> DestroyVDSCommand, log id: 560eca57 >>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> moved from 'MigratingFrom' --> 'Down' >>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status >>>> 'MigratingTo' >>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>> moved from 'MigratingTo' --> 'Up' >>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >>>> MigrateStatusVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: >>>> null, Custom Event ID: -1, Message: Migration completed (VM: >>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual >>>> downtime: (N/A)) >>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (ForkJoinPool-1-worker-4) [] Lock freed to object >>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >>>> FullListVDSCommand(HostName = ginger.local.systea.fr, >>>> FullListVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> 
vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, >>>> FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>> tabletEnable=true, pid=18748, guestDiskMapping={}, >>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, >>>> guestNumaNodes=[Ljava.lang.Object;@760085fd, >>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> 
plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, >>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 >>>> <(430)%20425-9600>, display=vnc}], log id: 7cc65298 >>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>> Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>> Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> 
{lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >>>> ect;@77951faf, custom={device_fbddd528-7d93-4 >>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>> snapshotId='null', logicalName='null', 
hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>> customProperties='[]', snapshotId='null', logicalName='null', >>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, >>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 >>>> <(430)%20426-3620>, display=vnc}], log id: 58cdef4c >>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>> [7fcb200a] Received a vnc Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>> displayIp=192.168.0.5}, type=graphics, 
deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>> port=5901} >>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>> [7fcb200a] Received a lease Device without an address when processing VM >>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>> >>>> >>>> >>>> >>>> For the VM with 2 vdisks we see : >>>> >>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired >>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', >>>> sharedLocks=''}' >>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >>>> group MIGRATE_VM with role type USER >>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 3702a9e0 >>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >>>> MigrateVDSCommandParameters:{runAsync='true', >>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>> consoleAddress='null', maxBandwidth='500', 
enableGuestEvents='true', >>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>> params=[]}}]]'}), log id: 1840069c >>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >>>> log id: 1840069c >>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: >>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, >>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom >>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: >>>> victor.local.systea.fr, User: admin at internal-authz). >>>> ... 
>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'
>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
>>>> ...
>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) was unexpectedly detected as 'MigratingTo' on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69')
>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) ignoring it in the refresh until migration is done
>>>>
>>>> and so on, the last lines repeated indefinitely for hours until we powered off the VM...
>>>> Is this something known? Any idea about that?
>>>>
>>>> Thanks
>>>>
>>>> Ovirt 4.1.6, updated last at feb-13. Gluster 3.12.1.
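For reference, the convergenceSchedule blob in the MigrateVDSCommand parameters earlier in these logs reads as an escalation table: each time the migration is judged to be stalling, the next setDowntime step is allowed (100, 150, 200, 300, 400, then 500 ms), and the limit=-1 entry (abort) is the last resort. A minimal interpreter for that table - my own illustrative Python sketch of how such a policy behaves, not actual VDSM code:

```python
# Escalation table copied from the convergenceSchedule in the logs above.
# Semantics here are my reading of it, not VDSM's implementation: each
# entry fires while the stall counter is within its limit; limit == -1
# is the final, unconditional action once the others are exhausted.
SCHEDULE = {
    "init": [("setDowntime", 100)],
    "stalling": [
        (1, ("setDowntime", 150)),
        (2, ("setDowntime", 200)),
        (3, ("setDowntime", 300)),
        (4, ("setDowntime", 400)),
        (6, ("setDowntime", 500)),
        (-1, ("abort", None)),
    ],
}

def action_for_stall(stall_count):
    """Return the action taken after `stall_count` stalling iterations:
    the first entry whose limit has not yet been exceeded, falling
    through to the limit == -1 (abort) entry at the end."""
    for limit, action in SCHEDULE["stalling"]:
        if limit == -1 or stall_count <= limit:
            return action
    return SCHEDULE["stalling"][-1][1]

# After one stall the allowed downtime rises to 150 ms; past the last
# limit (6) the policy gives up and aborts the migration.
print(action_for_stall(1), action_for_stall(7))
```

A migration that keeps stalling (as the repeating "ignoring it in the refresh" lines above suggest) should therefore eventually hit the abort entry rather than loop forever, which is part of why the endless repetition looks like a bug.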
>>>>
>>>> --
>>>>
>>>> Regards,
>>>>
>>>> *Frank Soyer *
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users at ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>

From frolland at redhat.com Mon Feb 26 11:30:04 2018
From: frolland at redhat.com (Fred Rolland)
Date: Mon, 26 Feb 2018 13:30:04 +0200
Subject: [ovirt-users] Can't move/copy VM disks between Data Centers
In-Reply-To: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv>
References: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv>
Message-ID:

Hi,

Which version are you using?

In 4.1, support for adding shared storage to a local DC was added [1].
You can copy/move disks to the shared storage domain, then detach the SD
and attach it to another DC.

In any case, you won't be able to live migrate VMs from the local DC; it
is not supported.

Regards,
Fred

[1] https://ovirt.org/develop/release-management/features/storage/sharedStorageDomainsAttachedToLocalDC/

On Fri, Feb 23, 2018 at 1:35 PM, Andrei V wrote:

> Hi,
>
> I have an oVirt setup, a separate PC host engine + 2 nodes (#10 + #11)
> with local storage domains (internal RAIDs).
> The 1st node #10 is currently active and can't be turned off.
>
> Since oVirt doesn't support more than 1 host in a data center with a
> local storage domain, as described here:
> http://lists.ovirt.org/pipermail/users/2018-January/086118.html
> I defined another data center with 1 node #11.
>
> Problem:
> 1) Can't copy or move VM disks from node #10 (even of inactive VMs) to
> node #11; this node is NOT shown as a possible destination.
> 2) Can't migrate active VMs to node #11.
> 3) Added NFS shares to data center #1 -> node #10, but can't change data
> center #1 -> storage type to Shared, because this operation requires
> detachment of the local storage domains, which is not possible: several
> VMs are active and can't be stopped.
>
> VM disks are placed on local storage domains because of the performance
> limitations of our 1Gbit network.
> 2 VMs run our accounting/inventory control system and are critical to
> NFS storage performance limits.
>
> How to solve this problem?
> Thanks in advance.
>
> Andrei
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From andreil1 at starlett.lv Mon Feb 26 11:30:30 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 26 Feb 2018 13:30:30 +0200
Subject: [ovirt-users] Q: VM disk speed penalty NFS vs Local data center storage
Message-ID: <2056C30C-2888-4394-A0F5-5E2DFA5716EA@starlett.lv>

Hi,

Since oVirt doesn't support more than 1 host in a data center with a local
storage domain, as described here:
http://lists.ovirt.org/pipermail/users/2018-January/086118.html
I have to set up an NFS server on the node with the VMs (on the same node),
accessed via NFS.
10 GB shared storage is in the future plans, yet right now we have only 2
nodes with local RAID on each.

Q: What is the VM disk speed penalty (approx. %) of NFS vs local RAID in
oVirt data center storage?
Currently I have 2 VMs running our accounting/inventory control system
which are critical to storage performance limits.
2 other VMs have very low disk activity.

Thanks in advance
Andrei Verovski

From andreil1 at starlett.lv Mon Feb 26 11:46:28 2018
From: andreil1 at starlett.lv (Andrei Verovski)
Date: Mon, 26 Feb 2018 13:46:28 +0200
Subject: [ovirt-users] Can't move/copy VM disks between Data Centers
In-Reply-To:
References: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv>
Message-ID: <4023F78E-1E84-439B-B89A-718C366B2C80@starlett.lv>

Hi,

Thanks for clarification. I'm using 4.2.
Anyway, I have to define another data center with a shared storage domain
(since a data center with a local storage domain can have only 1 host), and
then do what you have described.

Is it possible to copy VM disks from data center #1's local storage domain
to data center #2's NFS storage domain, or do I need to use an export
storage domain?

> On 26 Feb 2018, at 13:30, Fred Rolland wrote:
>
> Hi,
> Which version are you using?
>
> In 4.1, support for adding shared storage to a local DC was added [1].
> You can copy/move disks to the shared storage domain, then detach the SD
> and attach it to another DC.
>
> In any case, you won't be able to live migrate VMs from the local DC, it
> is not supported.
>
> Regards,
> Fred
>
> [1] https://ovirt.org/develop/release-management/features/storage/sharedStorageDomainsAttachedToLocalDC/
>
> On Fri, Feb 23, 2018 at 1:35 PM, Andrei V > wrote:
> Hi,
>
> I have an oVirt setup, a separate PC host engine + 2 nodes (#10 + #11)
> with local storage domains (internal RAIDs).
> The 1st node #10 is currently active and can't be turned off.
>
> Since oVirt doesn't support more than 1 host in a data center with a
> local storage domain, as described here:
> http://lists.ovirt.org/pipermail/users/2018-January/086118.html
> I defined another data center with 1 node #11.
>
> Problem:
> 1) Can't copy or move VM disks from node #10 (even of inactive VMs) to
> node #11; this node is NOT shown as a possible destination.
> 2) Can't migrate active VMs to node #11.
> 3) Added NFS shares to data center #1 -> node #10, but can't change data
> center #1 -> storage type to Shared, because this operation requires
> detachment of the local storage domains, which is not possible: several
> VMs are active and can't be stopped.
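Fred's procedure quoted in this thread implies a fixed ordering: the disk must be moved to the shared storage domain while that domain is still attached to the source DC, and only then can the domain be detached and re-attached elsewhere. A small illustrative sketch of that ordering (the function and step names are mine, not the oVirt SDK; the maintenance step reflects the usual oVirt requirement that a storage domain be in maintenance before detaching):

```python
# Hypothetical helper encoding the disk-relocation procedure as an
# ordered plan; purely illustrative, not a call into any oVirt API.
def relocation_plan(disk, shared_sd, src_dc, dst_dc):
    """Ordered steps to carry `disk` from a local-only SD in `src_dc`
    to `dst_dc`, using the shared SD `shared_sd` as the carrier."""
    return [
        f"move disk '{disk}' to shared SD '{shared_sd}' (still attached to '{src_dc}')",
        f"put SD '{shared_sd}' into maintenance",
        f"detach SD '{shared_sd}' from '{src_dc}'",
        f"attach SD '{shared_sd}' to '{dst_dc}' and activate it",
        f"register/import disk '{disk}' in '{dst_dc}'",
    ]

steps = relocation_plan("vm1_disk1", "nfs_shared", "DC1-local", "DC2")
# The move must precede the detach, or the disk stays behind on the
# local-only domain that cannot follow the SD to the other DC.
assert "move disk" in steps[0] and "detach" in steps[2]
```

This also answers the export-domain question as Fred frames it: the shared data domain itself acts as the carrier, so a separate export domain is not required for this path.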
>
> Andrei
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From ykaul at redhat.com Mon Feb 26 11:49:31 2018
From: ykaul at redhat.com (Yaniv Kaul)
Date: Mon, 26 Feb 2018 13:49:31 +0200
Subject: [ovirt-users] Q: VM disk speed penalty NFS vs Local data center storage
In-Reply-To: <2056C30C-2888-4394-A0F5-5E2DFA5716EA@starlett.lv>
References: <2056C30C-2888-4394-A0F5-5E2DFA5716EA@starlett.lv>
Message-ID:

On Mon, Feb 26, 2018 at 1:30 PM, Andrei Verovski wrote:

> Hi,
>
> Since oVirt doesn't support more than 1 host in a data center with a
> local storage domain, as described here:
> http://lists.ovirt.org/pipermail/users/2018-January/086118.html
>
> I have to set up an NFS server on the node with the VMs (on the same
> node), accessed via NFS.
> 10 GB shared storage is in the future plans, yet right now we have only
> 2 nodes with local RAID on each.
>
> Q: What is the VM disk speed penalty (approx. %) of NFS vs local RAID in
> oVirt data center storage?
> Currently I have 2 VMs running our accounting/inventory control system
> which are critical to storage performance limits.
> 2 other VMs have very low disk activity.
>

I don't know, but please remember there's both latency and throughput,
both of which are somewhat affected. Throughput will benefit from jumbo
frames, for example; unfortunately, they may affect latency a bit.

There was an interesting patch that bypassed NFS when the NFS server was
local [1]. It was never completed and merged.

Lastly, chapter 6 of the hyper-converged guide [2] (it should be available
in a few hours) might be an interesting option for you to consider - a
single Gluster that can later expand.
Y.
[1] https://gerrit.ovirt.org/#/c/68822/
[2] https://ovirt.org/documentation/gluster-hyperconverged/Gluster_Hyperconverged_Guide/

> Thanks in advance
> Andrei Verovski
>
> _______________________________________________
> Users mailing list
> Users at ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>

From mzamazal at redhat.com Mon Feb 26 11:59:37 2018
From: mzamazal at redhat.com (Milan Zamazal)
Date: Mon, 26 Feb 2018 12:59:37 +0100
Subject: [ovirt-users] VMs with multiple vdisks don't migrate
In-Reply-To: <3c75-5a93e000-d3-20359180@258179448> (fsoyer@systea.fr's message of "Mon, 26 Feb 2018 11:23:13 +0100")
References: <3c75-5a93e000-d3-20359180@258179448>
Message-ID: <878tbg9d1y.fsf@redhat.com>

"fsoyer" writes:

> I don't believe that this is related to a host; tests have been done from
> victor as source to ginger as destination, and from ginger to victor. I
> don't see problems on storage (Gluster 3.12, native, managed by oVirt):
> VMs with a single disk from 20 to 250G migrate without error in a few
> seconds and with no downtime.

The host itself may be fine, but libvirt/QEMU running there may expose
problems, perhaps just for some VMs. According to your logs something is
not behaving as expected on the source host during the faulty migration.

> How can I enable this libvirt debug mode?

Set the following options in /etc/libvirt/libvirtd.conf (look for examples
in comments there)

- log_level=1
- log_outputs="1:file:/var/log/libvirt/libvirtd.log"

and restart libvirt. Then /var/log/libvirt/libvirtd.log should contain the
log. It will be huge, so I suggest to enable it only for the time of
reproducing the problem.

> --
>
> Regards,
>
> Frank Soyer
>
> On Friday, February 23, 2018 09:56 CET, Milan Zamazal wrote:

Maor Lipchuk writes:
>
>> I encountered a bug (see [1]) which contains the same error mentioned in
>> your VDSM logs (see [2]), but I doubt it is related.
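Milan's libvirtd.conf instructions above, as a copy-pasteable sequence (a config fragment: paths, option names, and the libvirtd service name are the stock libvirt ones; it assumes the two options are not already set in your config, so check for existing entries first, and remember to revert afterwards because the log grows quickly):

```shell
# Append the two debug options Milan suggests (log_level=1 is DEBUG).
cat >> /etc/libvirt/libvirtd.conf <<'EOF'
log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
EOF

# Restart libvirt so the options take effect.
systemctl restart libvirtd

# ...reproduce the failing migration, then collect the (large) log:
ls -lh /var/log/libvirt/libvirtd.log
```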
>
> Indeed, it's not related.
>
> The error in vdsm_victor.log just means that the info-gathering call
> tries to access the libvirt domain before the incoming migration is
> completed. It's ugly but harmless.
>
>> Milan, maybe you have any advice to troubleshoot the issue? Will the
>> libvirt/qemu logs help?
>
> It seems there is something wrong on (at least) the source host. There
> are no migration progress messages in vdsm_ginger.log and there are
> warnings about stale stat samples. That looks like problems with
> calling libvirt - slow and/or stuck calls, maybe due to storage
> problems. The possibly faulty second disk could cause that.
>
> libvirt debug logs could tell us whether that is indeed the problem and
> whether it is caused by storage or something else.
>
>> I would suggest opening a bug on that issue so we can track it more
>> properly.
>>
>> Regards,
>> Maor
>>
>> [1]
>> https://bugzilla.redhat.com/show_bug.cgi?id=1486543 - Migration leads to
>> VM running on 2 Hosts
>>
>> [2]
>> 2018-02-16 09:43:35,236+0100 ERROR (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Internal server error (__init__:577)
>> Traceback (most recent call last):
>>   File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 572, in _handle_request
>>     res = method(**params)
>>   File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
>>     result = fn(*methodArgs)
>>   File "/usr/share/vdsm/API.py", line 1454, in getAllVmIoTunePolicies
>>     io_tune_policies_dict = self._cif.getAllVmIoTunePolicies()
>>   File "/usr/share/vdsm/clientIF.py", line 454, in getAllVmIoTunePolicies
>>     'current_values': v.getIoTune()}
>>   File "/usr/share/vdsm/virt/vm.py", line 2859, in getIoTune
>>     result = self.getIoTuneResponse()
>>   File "/usr/share/vdsm/virt/vm.py", line 2878, in getIoTuneResponse
>>     res = self._dom.blockIoTune(
>>   File "/usr/lib/python2.7/site-packages/vdsm/virt/virdomain.py", line 47, in __getattr__
>>     % self.vmid)
>> NotConnectedError: VM u'755cf168-de65-42ed-b22f-efe9136f7594' was not
>> started yet or was shut down
>>
>> On Thu, Feb 22, 2018 at 4:22 PM, fsoyer wrote:
>>
>>> Hi,
>>> Yes, on 2018-02-16 (vdsm logs) I tried with a VM standing on ginger
>>> (192.168.0.6) migrated (or failed to migrate...) to victor (192.168.0.5),
>>> while the engine.log in the first mail on 2018-02-12 was for VMs standing
>>> on victor, migrated (or failed to migrate...) to ginger. Symptoms were
>>> exactly the same, in both directions, and the VMs work like a charm
>>> before, and even after (migration "killed" by a poweroff of the VMs).
>>> Am I the only one experiencing this problem?
>>>
>>> Thanks
>>> --
>>>
>>> Regards,
>>>
>>> *Frank Soyer *
>>>
>>> On Thursday, February 22, 2018 00:45 CET, Maor Lipchuk wrote:
>>>
>>> Hi Frank,
>>>
>>> Sorry about the delayed response.
>>> I've been going through the logs you attached, although I could not find
>>> any specific indication of why the migration failed because of the disk
>>> you were mentioning.
>>> Does this VM run with both disks on the target host without migration?
>>>
>>> Regards,
>>> Maor
>>>
>>> On Fri, Feb 16, 2018 at 11:03 AM, fsoyer wrote:
>>>>
>>>> Hi Maor,
>>>> Sorry for the double post; I've changed the email address of my account
>>>> and supposed that I'd need to re-post it.
>>>> And thank you for your time. Here are the logs. I added a vdisk to an
>>>> existing VM: it no longer migrates, needing a poweroff after minutes.
>>>> Then simply deleting the second disk makes it migrate in exactly 9s
>>>> without problem!
>>>> https://gist.github.com/fgth/4707446331d201eef574ac31b6e89561
>>>> https://gist.github.com/fgth/f8de9c22664aee53722af676bff8719d
>>>>
>>>> --
>>>>
>>>> Regards,
>>>>
>>>> *Frank Soyer *
>>>> On Wednesday, February 14, 2018 11:04 CET, Maor Lipchuk <
>>>> mlipchuk at redhat.com> wrote:
>>>>
>>>> Hi Frank,
>>>>
>>>> I already replied on your last email.
>>>> Can you provide the VDSM logs from the time of the migration failure for >>>> both hosts: >>>> ginger.local.systea.fr and >>>> victor.local.systea.fr >>>> >>>> Thanks, >>>> Maor >>>> >>>> On Wed, Feb 14, 2018 at 11:23 AM, fsoyer wrote: >>>>> >>>>> Hi all, >>>>> I discovered yesterday a problem when migrating VMs with more than one >>>>> vdisk. >>>>> On our test servers (oVirt4.1, shared storage with Gluster), I created 2 >>>>> VMs needed for a test, from a template with a 20G vdisk. On these VMs I >>>>> added a 100G vdisk (for these tests I didn't want to waste time extending >>>>> the existing vdisks... But I lost time finally...). The VMs with the 2 >>>>> vdisks work well. >>>>> Now I saw some updates waiting on the host. I tried to put it in >>>>> maintenance... But it stopped on the two VMs. They were marked "migrating", >>>>> but no longer accessible. Other (small) VMs with only 1 vdisk were migrated >>>>> without problem at the same time. >>>>> I saw that a kvm process for the (big) VMs was launched on the source >>>>> AND destination host, but after tens of minutes, the migration and the VMs >>>>> were still frozen. I tried to cancel the migration for the VMs: it failed. >>>>> The only way to stop it was to power off the VMs: the kvm process died on >>>>> the 2 hosts and the GUI alerted on a failed migration. >>>>> In doubt, I tried to delete the second vdisk on one of these VMs: it >>>>> then migrated without error! And no access problem. >>>>> I tried to extend the first vdisk of the second VM, then deleted the >>>>> second vdisk: it now migrates without problem!
>>>>> >>>>> So after another test with a VM with 2 vdisks, I can say that this >>>>> blocked the migration process :( >>>>> >>>>> In engine.log, for a VMs with 1 vdisk migrating well, we see : >>>>> >>>>> 2018-02-12 16:46:29,705+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (default task-28) [2f712024-5982-46a8-82c8-fd8293da5725] Lock Acquired >>>>> to object 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>>> sharedLocks=''}' >>>>> 2018-02-12 16:46:29,955+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> Running command: MigrateVmToServerCommand internal: false. Entities >>>>> affected : ID: 3f57e669-5e4c-4d10-85cc-d573004a099d Type: VMAction >>>>> group MIGRATE_VM with role type USER >>>>> 2018-02-12 16:46:30,261+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log 
id: 14f61ee0 >>>>> 2018-02-12 16:46:30,262+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] START, >>>>> MigrateBrokerVDSCommand(HostName = victor.local.systea.fr, >>>>> MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', srcHost='192.168.0.6', >>>>> dstVdsId='d569c2dd-8f30-4878-8aea-858db285cf69', dstHost=' >>>>> 192.168.0.5:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 775cd381 >>>>> 2018-02-12 16:46:30,277+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] FINISH, MigrateBrokerVDSCommand, >>>>> log id: 775cd381 >>>>> 2018-02-12 16:46:30,285+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-32) [2f712024-5982-46a8-82c8-fd8293da5725] >>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 14f61ee0 >>>>> 2018-02-12 16:46:30,301+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-32) >>>>> [2f712024-5982-46a8-82c8-fd8293da5725] EVENT_ID: >>>>> VM_MIGRATION_START(62), Correlation ID: 
2f712024-5982-46a8-82c8-fd8293da5725, >>>>> Job ID: 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom >>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>>> ginger.local.systea.fr, User: admin at internal-authz). >>>>> 2018-02-12 16:46:31,106+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>>> START, FullListVDSCommand(HostName = victor.local.systea.fr, >>>>> FullListVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 54b4b435 >>>>> 2018-02-12 16:46:31,147+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler9) [54a65b66] >>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>>> emulatedMachine=pc-i440fx-rhel7.3.0, tabletEnable=true, pid=1493, >>>>> guestDiskMapping={0QEMU_QEMU_HARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>>> timeOffset=0, cpuType=Nehalem, smp=2, pauseCode=NOERR, >>>>> guestNumaNodes=[Ljava.lang.Object;@1d9042cd, smartcardEnable=false, >>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>>> 
4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>>> vmName=Oracle_SECONDARY, nice=0, status=Migration Source, maxMemSize=32768, >>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> numOfIoThreads=2, smpThreadsPerCore=1, memGuaranteedSize=8192, >>>>> kvmEnable=true, pitReinjection=false, displayNetwork=ovirtmgmt, >>>>> devices=[Ljava.lang.Object;@28ae66d7, display=vnc, maxVCpus=16, >>>>> clientIp=, statusTime=4299484520, maxMemSlots=16}], log id: 
54b4b435 >>>>> 2018-02-12 16:46:31,150+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler1) >>>>> [27fac647] Fetched 3 VMs from VDS 'd569c2dd-8f30-4878-8aea-858db285cf69' >>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>>> [54a65b66] Received a vnc Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>>> displayIp=192.168.0.6}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>>> port=5901} >>>>> 2018-02-12 16:46:31,151+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler9) >>>>> [54a65b66] Received a lease Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> was unexpectedly detected as 'MigratingTo' on VDS >>>>> 'd569c2dd-8f30-4878-8aea-858db285cf69'(ginger.local.systea.fr) >>>>> (expected on 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1') >>>>> 2018-02-12 16:46:31,152+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler1) [27fac647] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>>> is migrating to VDS 'd569c2dd-8f30-4878-8aea-858db285cf69'( >>>>> ginger.local.systea.fr) ignoring it in the refresh until migration is 
>>>>> done >>>>> .... >>>>> 2018-02-12 16:46:41,631+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d' >>>>> was reported as Down on VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>>> victor.local.systea.fr) >>>>> 2018-02-12 16:46:41,632+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] START, >>>>> DestroyVDSCommand(HostName = victor.local.systea.fr, >>>>> DestroyVmVDSCommandParameters:{runAsync='true', >>>>> hostId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d', force='false', >>>>> secondsToWait='0', gracefully='false', reason='', ignoreNoVm='true'}), log >>>>> id: 560eca57 >>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.DestroyVDSCommand] (ForkJoinPool-1-worker-11) [] FINISH, >>>>> DestroyVDSCommand, log id: 560eca57 >>>>> 2018-02-12 16:46:41,650+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> moved from 'MigratingFrom' --> 'Down' >>>>> 2018-02-12 16:46:41,651+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-11) [] Handing over VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> to Host 'd569c2dd-8f30-4878-8aea-858db285cf69'. 
Setting VM to status >>>>> 'MigratingTo' >>>>> 2018-02-12 16:46:42,163+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (ForkJoinPool-1-worker-4) [] VM '3f57e669-5e4c-4d10-85cc-d573004a099d'(Oracle_SECONDARY) >>>>> moved from 'MigratingTo' --> 'Up' >>>>> 2018-02-12 16:46:42,169+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>>> START, MigrateStatusVDSCommand(HostName = ginger.local.systea.fr, >>>>> MigrateStatusVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}), log id: 7a25c281 >>>>> 2018-02-12 16:46:42,174+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateStatusVDSCommand] (ForkJoinPool-1-worker-4) [] >>>>> FINISH, MigrateStatusVDSCommand, log id: 7a25c281 >>>>> 2018-02-12 16:46:42,194+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (ForkJoinPool-1-worker-4) [] >>>>> EVENT_ID: VM_MIGRATION_DONE(63), Correlation ID: >>>>> 2f712024-5982-46a8-82c8-fd8293da5725, Job ID: >>>>> 4bd19aa9-cc99-4d02-884e-5a1e857a7738, Call Stack: null, Custom ID: >>>>> null, Custom Event ID: -1, Message: Migration completed (VM: >>>>> Oracle_SECONDARY, Source: victor.local.systea.fr, Destination: >>>>> ginger.local.systea.fr, Duration: 11 seconds, Total: 11 seconds, Actual >>>>> downtime: (N/A)) >>>>> 2018-02-12 16:46:42,201+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (ForkJoinPool-1-worker-4) [] Lock freed to object >>>>> 'EngineLock:{exclusiveLocks='[3f57e669-5e4c-4d10-85cc-d573004a099d=VM]', >>>>> sharedLocks=''}' >>>>> 2018-02-12 16:46:42,203+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] START, >>>>> FullListVDSCommand(HostName = ginger.local.systea.fr, >>>>> FullListVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> 
vmIds='[3f57e669-5e4c-4d10-85cc-d573004a099d]'}), log id: 7cc65298 >>>>> 2018-02-12 16:46:42,254+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (ForkJoinPool-1-worker-4) [] FINISH, >>>>> FullListVDSCommand, return: [{acpiEnable=true, >>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>>> tabletEnable=true, pid=18748, guestDiskMapping={}, >>>>> transparentHugePages=true, timeOffset=0, cpuType=Nehalem, smp=2, >>>>> guestNumaNodes=[Ljava.lang.Object;@760085fd, >>>>> custom={device_fbddd528-7d93-49c6-a286-180e021cb274device_87 >>>>> 9c93ab-4df1-435c-af02-565039fcc254=VmDevice:{id='VmDeviceId: >>>>> {deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>>> bus=0x00, domain=0x0000, type=pci, 
function=0x1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@2e4d3dd3, >>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304259600 >>>>> <(430)%20425-9600>, display=vnc}], log id: 7cc65298 >>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>>> Received a vnc Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {device=vnc, specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>>> port=5901} >>>>> 2018-02-12 16:46:42,257+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (ForkJoinPool-1-worker-4) [] >>>>> Received a lease Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, 
skipping device: >>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>>> 2018-02-12 16:46:46,260+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.FullListVDSCommand] (DefaultQuartzScheduler5) [7fcb200a] >>>>> FINISH, FullListVDSCommand, return: [{acpiEnable=true, >>>>> emulatedMachine=pc-i440fx-rhel7.3.0, afterMigrationStatus=, >>>>> tabletEnable=true, pid=18748, guestDiskMapping={0QEMU_QEMU_H >>>>> ARDDISK_d890fa68-fba4-4f49-9={name=/dev/sda}, >>>>> QEMU_DVD-ROM_QM00003={name=/dev/sr0}}, transparentHugePages=true, >>>>> timeOffset=0, cpuType=Nehalem, smp=2, guestNumaNodes=[Ljava.lang.Obj >>>>> ect;@77951faf, custom={device_fbddd528-7d93-4 >>>>> 9c6-a286-180e021cb274device_879c93ab-4df1-435c-af02-565039fc >>>>> c254=VmDevice:{id='VmDeviceId:{deviceId='879c93ab-4df1-435c-af02-565039fcc254', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel0', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}, device_fbddd528-7d93-49c6-a286 >>>>> -180e021cb274device_879c93ab-4df1-435c-af02-565039fcc254devi >>>>> ce_8945f61a-abbe-4156-8485-a4aa6f1908dbdevice_017b5e59-01c4- >>>>> 4aac-bf0c-b5d9557284d6=VmDevice:{id='VmDeviceId:{deviceId='0 >>>>> 17b5e59-01c4-4aac-bf0c-b5d9557284d6', vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', >>>>> device='tablet', type='UNKNOWN', bootOrder='0', specParams='[]', >>>>> address='{bus=0, type=usb, port=1}', managed='false', plugged='true', >>>>> readOnly='false', deviceAlias='input0', customProperties='[]', >>>>> 
snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274=VmDevice:{id='Vm >>>>> DeviceId:{deviceId='fbddd528-7d93-49c6-a286-180e021cb274', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='ide', >>>>> type='CONTROLLER', bootOrder='0', specParams='[]', address='{slot=0x01, >>>>> bus=0x00, domain=0x0000, type=pci, function=0x1}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='ide', customProperties='[]', >>>>> snapshotId='null', logicalName='null', hostDevice='null'}, >>>>> device_fbddd528-7d93-49c6-a286-180e021cb274device_879c93ab-4 >>>>> df1-435c-af02-565039fcc254device_8945f61a-abbe-4156-8485-a4a >>>>> a6f1908db=VmDevice:{id='VmDeviceId:{deviceId='8945f61a-abbe-4156-8485-a4aa6f1908db', >>>>> vmId='3f57e669-5e4c-4d10-85cc-d573004a099d'}', device='unix', >>>>> type='CHANNEL', bootOrder='0', specParams='[]', address='{bus=0, >>>>> controller=0, type=virtio-serial, port=2}', managed='false', >>>>> plugged='true', readOnly='false', deviceAlias='channel1', >>>>> customProperties='[]', snapshotId='null', logicalName='null', >>>>> hostDevice='null'}}, vmType=kvm, memSize=8192, smpCoresPerSocket=1, >>>>> vmName=Oracle_SECONDARY, nice=0, status=Up, maxMemSize=32768, >>>>> bootMenuEnable=false, vmId=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> numOfIoThreads=2, smpThreadsPerCore=1, smartcardEnable=false, >>>>> maxMemSlots=16, kvmEnable=true, pitReinjection=false, >>>>> displayNetwork=ovirtmgmt, devices=[Ljava.lang.Object;@286410fd, >>>>> memGuaranteedSize=8192, maxVCpus=16, clientIp=, statusTime=4304263620 >>>>> <(430)%20426-3620>, display=vnc}], log id: 58cdef4c >>>>> 2018-02-12 16:46:46,267+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>>> [7fcb200a] Received a vnc Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {device=vnc, 
specParams={displayNetwork=ovirtmgmt, keyMap=fr, >>>>> displayIp=192.168.0.5}, type=graphics, deviceId=813957b1-446a-4e88-9e40-9fe76d2c442d, >>>>> port=5901} >>>>> 2018-02-12 16:46:46,268+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmDevicesMonitoring] (DefaultQuartzScheduler5) >>>>> [7fcb200a] Received a lease Device without an address when processing VM >>>>> 3f57e669-5e4c-4d10-85cc-d573004a099d devices, skipping device: >>>>> {lease_id=3f57e669-5e4c-4d10-85cc-d573004a099d, >>>>> sd_id=1e51cecc-eb2e-47d0-b185-920fdc7afa16, >>>>> deviceId={uuid=a09949aa-5642-4b6d-94a4-8b0d04257be5}, offset=6291456, >>>>> device=lease, path=/rhev/data-center/mnt/glusterSD/192.168.0.6: >>>>> _DATA01/1e51cecc-eb2e-47d0-b185-920fdc7afa16/dom_md/xleases, type=lease} >>>>> >>>>> >>>>> >>>>> >>>>> For the VM with 2 vdisks we see : >>>>> >>>>> 2018-02-12 16:49:06,112+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (default task-50) [92b5af33-cb87-4142-b8fe-8b838dd7458e] Lock Acquired >>>>> to object 'EngineLock:{exclusiveLocks='[f7d4ec12-627a-4b83-b59e-886400d55474=VM]', >>>>> sharedLocks=''}' >>>>> 2018-02-12 16:49:06,407+01 INFO [org.ovirt.engine.core.bll.MigrateVmToServerCommand] >>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>>> Running command: MigrateVmToServerCommand internal: false. 
Entities >>>>> affected : ID: f7d4ec12-627a-4b83-b59e-886400d55474 Type: VMAction >>>>> group MIGRATE_VM with role type USER >>>>> 2018-02-12 16:49:06,712+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>>> START, MigrateVDSCommand( MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 3702a9e0 >>>>> 2018-02-12 16:49:06,713+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] START, >>>>> MigrateBrokerVDSCommand(HostName = ginger.local.systea.fr, >>>>> MigrateVDSCommandParameters:{runAsync='true', >>>>> hostId='d569c2dd-8f30-4878-8aea-858db285cf69', >>>>> vmId='f7d4ec12-627a-4b83-b59e-886400d55474', srcHost='192.168.0.5', >>>>> dstVdsId='ce3938b1-b23f-4d22-840a-f17d7cd87bb1', dstHost=' >>>>> 192.168.0.6:54321', migrationMethod='ONLINE', tunnelMigration='false', >>>>> migrationDowntime='0', autoConverge='true', migrateCompressed='false', >>>>> consoleAddress='null', 
maxBandwidth='500', enableGuestEvents='true', >>>>> maxIncomingMigrations='2', maxOutgoingMigrations='2', >>>>> convergenceSchedule='[init=[{name=setDowntime, params=[100]}], >>>>> stalling=[{limit=1, action={name=setDowntime, params=[150]}}, {limit=2, >>>>> action={name=setDowntime, params=[200]}}, {limit=3, >>>>> action={name=setDowntime, params=[300]}}, {limit=4, >>>>> action={name=setDowntime, params=[400]}}, {limit=6, >>>>> action={name=setDowntime, params=[500]}}, {limit=-1, action={name=abort, >>>>> params=[]}}]]'}), log id: 1840069c >>>>> 2018-02-12 16:49:06,724+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.vdsbroker.MigrateBrokerVDSCommand] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] FINISH, MigrateBrokerVDSCommand, >>>>> log id: 1840069c >>>>> 2018-02-12 16:49:06,732+01 INFO [org.ovirt.engine.core.vdsbroker.MigrateVDSCommand] >>>>> (org.ovirt.thread.pool-6-thread-49) [92b5af33-cb87-4142-b8fe-8b838dd7458e] >>>>> FINISH, MigrateVDSCommand, return: MigratingFrom, log id: 3702a9e0 >>>>> 2018-02-12 16:49:06,753+01 INFO [org.ovirt.engine.core.dal.db >>>>> broker.auditloghandling.AuditLogDirector] (org.ovirt.thread.pool-6-thread-49) >>>>> [92b5af33-cb87-4142-b8fe-8b838dd7458e] EVENT_ID: >>>>> VM_MIGRATION_START(62), Correlation ID: 92b5af33-cb87-4142-b8fe-8b838dd7458e, >>>>> Job ID: f4f54054-f7c8-4481-8eda-d5a15c383061, Call Stack: null, Custom >>>>> ID: null, Custom Event ID: -1, Message: Migration started (VM: >>>>> Oracle_PRIMARY, Source: ginger.local.systea.fr, Destination: >>>>> victor.local.systea.fr, User: admin at internal-authz). >>>>> ... 
>>>>> 2018-02-12 16:49:16,453+01 INFO [org.ovirt.engine.core.vdsbro >>>>> ker.monitoring.VmsStatisticsFetcher] (DefaultQuartzScheduler4) >>>>> [162a5bc3] Fetched 2 VMs from VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1' >>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>>>> was unexpectedly detected as 'MigratingTo' on VDS >>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>>>> 2018-02-12 16:49:16,455+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler4) [162a5bc3] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>>> victor.local.systea.fr) ignoring it in the refresh until migration is >>>>> done >>>>> ... >>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474'(Oracle_PRIMARY) >>>>> was unexpectedly detected as 'MigratingTo' on VDS >>>>> 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'(victor.local.systea.fr) >>>>> (expected on 'd569c2dd-8f30-4878-8aea-858db285cf69') >>>>> 2018-02-12 16:49:31,484+01 INFO [org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer] >>>>> (DefaultQuartzScheduler5) [11a7619a] VM 'f7d4ec12-627a-4b83-b59e-886400d55474' >>>>> is migrating to VDS 'ce3938b1-b23f-4d22-840a-f17d7cd87bb1'( >>>>> victor.local.systea.fr) ignoring it in the refresh until migration is >>>>> done >>>>> >>>>> >>>>> >>>>> and so on, last lines repeated indefinitly for hours since we poweroff >>>>> the VM... >>>>> Is this something known ? Any idea about that ? >>>>> >>>>> Thanks >>>>> >>>>> Ovirt 4.1.6, updated last at feb-13. Gluster 3.12.1. 
>>>>> -- >>>>> Cordialement, >>>>> *Frank Soyer * >>>>> _______________________________________________ >>>>> Users mailing list >>>>> Users at ovirt.org >>>>> http://lists.ovirt.org/mailman/listinfo/users From nicolas at ecarnot.net Mon Feb 26 12:01:38 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Mon, 26 Feb 2018 13:01:38 +0100 Subject: [ovirt-users] Hosts firewall custom setup Message-ID: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> Hello, On oVirt 4.2.1.7, I'm trying to set up custom iptables rules as I have been doing for years with engine-config --set IPTablesConfigSiteCustom="blah blah blah". On my hosts, I can see that /etc/sysconfig/iptables does contain the correct custom rules I added, but when checking manually with iptables -L, I don't see my rules active. On my hosts, I see that the iptables service is stopped and disabled, and that the firewalld service is up and running. That explains why the iptables customization has no effect. In the engine setup, I see that /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf contains: OVESETUP_CONFIG/firewallManager=none:None I'm confused about this setting: when running engine-setup, I'm not sure whether answering yes to the question about the firewall will modify the engine, the hosts, or all of them. Actually, I'd like my engine to stay with a disabled firewall, but my hosts with an active one. Is it true to say that this is not an option and that I have to answer yes, enable the firewall on the engine, and allow the OVESETUP_CONFIG/firewallManager option to be set (to firewalld or iptables), thus allowing this setup to be propagated to the hosts? Thank you.
-- Nicolas ECARNOT From sleviim at redhat.com Mon Feb 26 12:31:23 2018 From: sleviim at redhat.com (Shani Leviim) Date: Mon, 26 Feb 2018 14:31:23 +0200 Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> Message-ID: Hi Lionel, The error message you've mentioned sounds like a UI error. Can you please attach your ui log? Also, regarding the data from the 'images' table you've uploaded, can you describe which line is the relevant disk? Finally (for now), in case the snapshot was deleted, can you please validate it by viewing the output of: $ select * from snapshots; *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec wrote: > Hi Shani, > thank you for helping me with your reply, > I just made a little mistake in my explanation. In fact it's the snapshot > that does not exist anymore. It is the disk(s) related to it which still > exist, and perhaps the LVM volume. > So can I manually delete this disk in the database? What about the lvm volume? > Is it better to recreate the disk, sync the data and destroy the old one? > > > > ----- Mail original ----- > De: "Shani Leviim" > À: "Lionel Caignec" > Cc: "users" > Envoyé: Dimanche 25 Février 2018 14:26:41 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Hi Lionel, > > You can try to delete that snapshot directly from the database. > > In case of using psql [1], once you've logged in to your database, you can > run this query: > $ select * from snapshots where vm_id = ''; > This one would list the snapshots associated with a VM by its id. > > In case you don't have your vm_id, you can locate it by querying: > $ select * from vms where vm_name = 'nil'; > This one would show you some details about a VM by its name (including the > vm's id).
> > Once you've found the relevant snapshot, you can delete it by running: > $ delete from snapshots where snapshot_id = ''; > This one would delete the desired snapshot from the database. > > Since it's a delete operation, I would suggest confirming the ids before > executing it. > > Hope you've found it useful! > > [1] > https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_ > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > *Regards,* > > *Shani Leviim* > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote: > > > Hi, > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost without > > name or uuid, only information is size (see attachment). In the snapshot > > tab there is no trace about this disk. > > > > In database (table images) i found this : > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | > > 1 | 2 > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | > > 1 | 2 > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > But i does not know which line is my disk. Is it possible to delete > > directly into database? 
> > Or is it better to dump my disk to another new and delete the "corrupted > > one"? > > > > Another thing, when i try to move the disk to another storage domain i > > always get "uncaght exeption occured ..." and no error in engine.log. > > > > > > Thank you for helping. > > > > -- > > Lionel Caignec > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From caignec at cines.fr Mon Feb 26 12:48:28 2018 From: caignec at cines.fr (Lionel Caignec) Date: Mon, 26 Feb 2018 13:48:28 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> Message-ID: <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> Hi 1) this is error message from ui.log 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation name: 8C01181C3B121D0AAE1312275CC96415 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] (default task-3) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: (TypeError) __gwt$exception: : Cannot read property 'F' of null at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) at org.ovirt.engine.ui.uicommonweb.models.storage.DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233) [frontend.jar:] at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233) [frontend.jar:] at org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.$onSuccess(OperationProcessor.java:139) [frontend.jar:] at 
org.ovirt.engine.ui.frontend.communication.OperationProcessor$2.onSuccess(OperationProcessor.java:139) [frontend.jar:] at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:] at org.ovirt.engine.ui.frontend.communication.GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269) [frontend.jar:] at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter.onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237) [gwt-servlet.jar:] at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) [gwt-servlet.jar:] at Unknown.eval(webadmin-0.js at 65) at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) [gwt-servlet.jar:] at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) [gwt-servlet.jar:] at Unknown.eval(webadmin-0.js at 54) 2) This line seems to be about the bad disk : f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | 2748779069440 | 00000000-0000-0000-0000-000000000000 | 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c 3) Snapshot table is empty for the concerned vm_id. ----- Mail original ----- De: "Shani Leviim" ?: "Lionel Caignec" Cc: "users" Envoy?: Lundi 26 F?vrier 2018 13:31:23 Objet: Re: [ovirt-users] Ghost Snapshot Disk Hi Lionel, The error message you've mentioned sounds like a UI error. Can you please attach your ui log? Also, on the data from 'images' table you've uploaded, can you describe which line is the relevant disk? Finally (for now), in case the snapshot was deleted, can you please validate it by viewing the output of: $ select * from snapshots; *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec wrote: > Hi Shani, > thank you for helping me with your reply, > i juste make a little mistake on explanation. 
In fact it's the snapshot > does not exist anymore. This is the disk(s) relative to her wich still > exist, and perhaps LVM volume. > So can i delete manually this disk in database? what about the lvm volume? > Is it better to recreate disk sync data and destroy old one? > > > > ----- Mail original ----- > De: "Shani Leviim" > ?: "Lionel Caignec" > Cc: "users" > Envoy?: Dimanche 25 F?vrier 2018 14:26:41 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Hi Lionel, > > You can try to delete that snapshot directly from the database. > > In case of using psql [1], once you've logged in to your database, you can > run this query: > $ select * from snapshots where vm_id = ''; > This one would list the snapshots associated with a VM by its id. > > In case you don't have you vm_id, you can locate it by querying: > $ select * from vms where vm_name = 'nil'; > This one would show you some details about a VM by its name (including the > vm's id). > > Once you've found the relevant snapshot, you can delete it by running: > $ delete from snapshots where snapshot_id = ''; > This one would delete the desired snapshot from the database. > > Since it's a delete operation, I would suggest confirming the ids before > executing it. > > Hope you've found it useful! > > [1] > https://www.ovirt.org/documentation/install-guide/appe-Preparing_a_Remote_ > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > *Regards,* > > *Shani Leviim* > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec wrote: > > > Hi, > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost without > > name or uuid, only information is size (see attachment). In the snapshot > > tab there is no trace about this disk. 
> > > > In database (table images) i found this : > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee | > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f | > > 1 | 2 > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 | > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f | > > 1 | 2 > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > But i does not know which line is my disk. Is it possible to delete > > directly into database? > > Or is it better to dump my disk to another new and delete the "corrupted > > one"? > > > > Another thing, when i try to move the disk to another storage domain i > > always get "uncaght exeption occured ..." and no error in engine.log. > > > > > > Thank you for helping. 
> > > > -- > > Lionel Caignec > > > > _______________________________________________ > > Users mailing list > > Users at ovirt.org > > http://lists.ovirt.org/mailman/listinfo/users > > > > > From didi at redhat.com Mon Feb 26 13:03:54 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 26 Feb 2018 15:03:54 +0200 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> Message-ID: On Mon, Feb 26, 2018 at 2:01 PM, Nicolas Ecarnot wrote: > Hello, > > On oVirt 4.2.1.7, I'm trying to setup custom iptables rules as I'm doing > since years with engine-config --set IPTablesConfigSiteCustom="blah blah > blah". > > On my hosts, I can see in my hosts that /etc/sysconfig/iptables does contain > the correct custom rules I added, but when manually checking with iptables > -L, I don't see my rules active. > > On my hosts, I see that the iptables services is stopped and disabled, and > that the firewalld service is up and running. > > That explains why iptables customization has no effect. Indeed. IIRC the type of firewall is now set per cluster or something like that, not sure about the details - adding Ondra. > > In the engine setup, I see that > /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf contains : > OVESETUP_CONFIG/firewallManager=none:None > > I'm confused about this setting : when running engine-setup, I'm not sure to > understand if answering yes to the question about the firewall will modify > the engine, the hosts, or all of them? Only the engine. > > Actually, I'd like my engine to stay with a disabled firewall, but my hosts > with an active one. So you should reply 'No' as you did in 'engine-setup', and handle iptables/firewalld on the engine after it's set up (upgraded), I think from the ui. 
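Since the engine-setup answer only affects the engine machine, host-side rules have to be managed where firewalld actually runs. As a minimal sketch of adding a custom rule directly on one host with firewall-cmd (the port number and source network are invented for illustration; the supported 4.2 route is the host-deploy customization discussed later in this thread):

```shell
# Open an extra TCP port in the permanent configuration of the default zone:
firewall-cmd --permanent --add-port=12345/tcp

# Or express something more precise as a rich rule:
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" port port="12345" protocol="tcp" accept'

# Make the permanent configuration the running one, then verify:
firewall-cmd --reload
firewall-cmd --list-ports
```

Rules added this way survive a firewalld reload, but a host redeploy may rewrite the firewall configuration, which is why hooking into host-deploy is the more durable approach.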
> > Is it true to say that this is not an option and I have to answer yes, > enable the firewall on the engine, allowing the > OVESETUP_CONFIG/firewallManager option to be set up (to firewalld or > iptables), thus allowing the spread of this setup towards the hosts? No, they are unrelated. Best regards, -- Didi From cma at cmadams.net Mon Feb 26 13:26:43 2018 From: cma at cmadams.net (Chris Adams) Date: Mon, 26 Feb 2018 07:26:43 -0600 Subject: [ovirt-users] Network and disk inactive after 4.2.1 upgrade In-Reply-To: <8037634a-2e17-fa6f-76de-3f7fd61685d9@unibas.ch> References: <20180213150534.GA15147@cmadams.net> <8037634a-2e17-fa6f-76de-3f7fd61685d9@unibas.ch> Message-ID: <20180226132643.GA17035@cmadams.net> Once upon a time, Arsène Gschwind said: > After upgrading from 4.1.9 to 4.2.1 I had the same problem. > Had to reactivate network and disk on all VMs. Do you use the hosted engine? If so, how did you fix it? -- Chris Adams From sleviim at redhat.com Mon Feb 26 13:29:16 2018 From: sleviim at redhat.com (Shani Leviim) Date: Mon, 26 Feb 2018 15:29:16 +0200 Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> Message-ID: Hi, What is your engine version, please? I'm trying to reproduce your steps, to better understand what is the cause of that error. Therefore, a full engine log is needed. Can you please attach it? Thanks, *Shani Leviim* On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec wrote: > Hi > > 1) this is error message from ui.log > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. 
> server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation > name: 8C01181C3B121D0AAE1312275CC96415 > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend.server.gwt.OvirtRemoteLoggingService] > (default task-3) [] Uncaught exception: com.google.gwt.core.client.JavaScriptException: > (TypeError) > __gwt$exception: : Cannot read property 'F' of null > at org.ovirt.engine.ui.uicommonweb.models.storage. > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) > at org.ovirt.engine.ui.uicommonweb.models.storage. > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess(Frontend.java:233) > [frontend.jar:] > at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend.java:233) > [frontend.jar:] > at org.ovirt.engine.ui.frontend.communication. > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) > [frontend.jar:] > at org.ovirt.engine.ui.frontend.communication. > OperationProcessor$2.onSuccess(OperationProcessor.java:139) > [frontend.jar:] > at org.ovirt.engine.ui.frontend.communication. > GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider.java:269) > [frontend.jar:] > at org.ovirt.engine.ui.frontend.communication. > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider.java:269) > [frontend.jar:] > at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter. 
> onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] > at com.google.gwt.http.client.Request.$fireOnResponseReceived(Request.java:237) > [gwt-servlet.jar:] > at com.google.gwt.http.client.RequestBuilder$1.onReadyStateChange(RequestBuilder.java:409) > [gwt-servlet.jar:] > at Unknown.eval(webadmin-0.js at 65) > at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) > [gwt-servlet.jar:] > at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) > [gwt-servlet.jar:] > at Unknown.eval(webadmin-0.js at 54) > > > 2) This line seems to be about the bad disk : > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > 3) Snapshot table is empty for the concerned vm_id. > > ----- Mail original ----- > De: "Shani Leviim" > ?: "Lionel Caignec" > Cc: "users" > Envoy?: Lundi 26 F?vrier 2018 13:31:23 > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > Hi Lionel, > > The error message you've mentioned sounds like a UI error. > Can you please attach your ui log? > > Also, on the data from 'images' table you've uploaded, can you describe > which line is the relevant disk? > > Finally (for now), in case the snapshot was deleted, can you please > validate it by viewing the output of: > $ select * from snapshots; > > > > *Regards,* > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec wrote: > > > Hi Shani, > > thank you for helping me with your reply, > > i juste make a little mistake on explanation. In fact it's the snapshot > > does not exist anymore. This is the disk(s) relative to her wich still > > exist, and perhaps LVM volume. > > So can i delete manually this disk in database? what about the lvm > volume? > > Is it better to recreate disk sync data and destroy old one? 
> > > > > > > > ----- Mail original ----- > > De: "Shani Leviim" > > ?: "Lionel Caignec" > > Cc: "users" > > Envoy?: Dimanche 25 F?vrier 2018 14:26:41 > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > Hi Lionel, > > > > You can try to delete that snapshot directly from the database. > > > > In case of using psql [1], once you've logged in to your database, you > can > > run this query: > > $ select * from snapshots where vm_id = ''; > > This one would list the snapshots associated with a VM by its id. > > > > In case you don't have you vm_id, you can locate it by querying: > > $ select * from vms where vm_name = 'nil'; > > This one would show you some details about a VM by its name (including > the > > vm's id). > > > > Once you've found the relevant snapshot, you can delete it by running: > > $ delete from snapshots where snapshot_id = ''; > > This one would delete the desired snapshot from the database. > > > > Since it's a delete operation, I would suggest confirming the ids before > > executing it. > > > > Hope you've found it useful! > > > > [1] > > https://www.ovirt.org/documentation/install-guide/ > appe-Preparing_a_Remote_ > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > > > > *Regards,* > > > > *Shani Leviim* > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > wrote: > > > > > Hi, > > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost without > > > name or uuid, only information is size (see attachment). In the > snapshot > > > tab there is no trace about this disk. 
> > > > > > In database (table images) i found this : > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > | 2 | 4 | 17e26476-cecb-441d-a5f7-46ab3ef387ee > | > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f > | > > > 1 | 2 > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > | 2 | 4 | bf834a91-c69f-4d2c-b639-116ed58296d8 > | > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f > | > > > 1 | 2 > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > > > > But i does not know which line is my disk. Is it possible to delete > > > directly into database? > > > Or is it better to dump my disk to another new and delete the > "corrupted > > > one"? > > > > > > Another thing, when i try to move the disk to another storage domain i > > > always get "uncaght exeption occured ..." and no error in engine.log. > > > > > > > > > Thank you for helping. > > > > > > -- > > > Lionel Caignec > > > > > > _______________________________________________ > > > Users mailing list > > > Users at ovirt.org > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolas at ecarnot.net Mon Feb 26 13:49:58 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Mon, 26 Feb 2018 14:49:58 +0100 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> Message-ID: <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> Le 26/02/2018 à 14:03, Yedidyah Bar David a écrit : > On Mon, Feb 26, 2018 at 2:01 PM, Nicolas Ecarnot wrote: >> Hello, >> >> On oVirt 4.2.1.7, I'm trying to setup custom iptables rules as I'm doing >> since years with engine-config --set IPTablesConfigSiteCustom="blah blah >> blah". >> >> On my hosts, I can see in my hosts that /etc/sysconfig/iptables does contain >> the correct custom rules I added, but when manually checking with iptables >> -L, I don't see my rules active. >> >> On my hosts, I see that the iptables services is stopped and disabled, and >> that the firewalld service is up and running. >> >> That explains why iptables customization has no effect. > > Indeed. > > IIRC the type of firewall is now set per cluster or something like that, not > sure about the details - adding Ondra. Per cluster, one can indeed choose the firewall type. I suppose it translates on the hosts into the activation of the adequate service. But how do we add custom rules in case of firewalld type? On the hosts, I imagine that could translate into changes in : /etc/firewalld/zones/public.xml -- Nicolas ECARNOT From didi at redhat.com Mon Feb 26 14:00:18 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 26 Feb 2018 16:00:18 +0200 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> Message-ID: On Mon, Feb 26, 2018 at 3:49 PM, Nicolas Ecarnot wrote: > Le 26/02/2018 à 14:03, Yedidyah Bar David a écrit : >> >> On Mon, Feb 26, 2018 at 2:01 PM, Nicolas Ecarnot >> wrote: >>> >>> Hello, >>> >>> On oVirt 4.2.1.7, I'm trying to setup custom iptables rules as I'm doing >>> since years with engine-config --set IPTablesConfigSiteCustom="blah blah >>> blah". >>> >>> On my hosts, I can see in my hosts that /etc/sysconfig/iptables does >>> contain >>> the correct custom rules I added, but when manually checking with >>> iptables >>> -L, I don't see my rules active. >>> >>> On my hosts, I see that the iptables services is stopped and disabled, >>> and >>> that the firewalld service is up and running. >>> >>> That explains why iptables customization has no effect. >> >> >> Indeed. >> >> IIRC the type of firewall is now set per cluster or something like that, >> not >> sure about the details - adding Ondra. > > > Per cluster, one can indeed choose the firewall type. > I suppose it translates on the hosts into the activation of the adequate > service. > But how do we add custom rules in case of firewalld type? Please see: https://ovirt.org/blog/2017/12/host-deploy-customization/ Best regards, > > On the hosts, I imagine that could translate into changes in : > /etc/firewalld/zones/public.xml > > -- > Nicolas ECARNOT -- Didi From mperina at redhat.com Mon Feb 26 14:06:07 2018 From: mperina at redhat.com (Martin Perina) Date: Mon, 26 Feb 2018 15:06:07 +0100 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> Message-ID: On Mon, Feb 26, 2018 at 2:49 PM, Nicolas Ecarnot wrote: > Le 26/02/2018 à 14:03, Yedidyah Bar David a écrit : > >> On Mon, Feb 26, 2018 at 2:01 PM, Nicolas Ecarnot >> wrote: >> >>> Hello, >>> >>> On oVirt 4.2.1.7, I'm trying to setup custom iptables rules as I'm doing >>> since years with engine-config --set IPTablesConfigSiteCustom="blah blah >>> blah". 
>>> >>> On my hosts, I can see in my hosts that /etc/sysconfig/iptables does >>> contain >>> the correct custom rules I added, but when manually checking with >>> iptables >>> -L, I don't see my rules active. >>> >>> On my hosts, I see that the iptables services is stopped and disabled, >>> and >>> that the firewalld service is up and running. >>> >>> That explains why iptables customization has no effect. >>> >> >> Indeed. >> >> IIRC the type of firewall is now set per cluster or something like that, >> not >> sure about the details - adding Ondra. >> > > Per cluster, one can indeed choose the firewall type. > I suppose it translates on the hosts into the activation of the adequate > service. > But how do we add custom rules in case of firewalld type? > > On the hosts, I imagine that could translate into changes in : > /etc/firewalld/zones/public.xml > Please take a look at the RFE below, which introduced firewalld support for hosts, and the blog post about the new possibilities to customize the host-deploy process (which can also be used for custom firewalld rules) in oVirt 4.2: https://bugzilla.redhat.com/show_bug.cgi?id=995362 https://www.ovirt.org/blog/2017/12/host-deploy-customization/ > -- > Nicolas ECARNOT > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Markus.Schaufler at ooe.gv.at Mon Feb 26 14:14:56 2018 From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at) Date: Mon, 26 Feb 2018 14:14:56 +0000 Subject: [ovirt-users] oVirt Hosts fail to Upgrade Message-ID: <9D6F18D2AC0D5245BE068C2BEBC069462852BD@msli01-202.res01.ads.ooe.local> Hi! I recently installed a 4-node Test-Cluster with FC storage for a PoC with a self-hosted-engine and some Test VMs running. 
I tried to upgrade the hosts via Webinterface via Maintainance -> Check for Upgrade -> Upgrade. They fail all with the same errors in the messages log on each host: Feb 26 15:04:31 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:31 432734 [2460]: s324 add_lockspace fail result -19 Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to read metadata from /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats#012 f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run#012 self._storage_broker.get_raw_stats()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats#012 .format(str(e)))#012RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to update state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 77, in run#012 if (self._status_broker._inquire_whiteboard_lock() or#012 
File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 183, in _inquire_whiteboard_lock#012 self.host_id, self._lease_file)#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 121, in host_id#012 raise ex.HostIdNotLockedError("Host id is not set")#012HostIdNotLockedError: Host id is not set Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to read metadata from /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats#012 f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run#012 self._storage_broker.get_raw_stats()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats#012 .format(str(e)))#012RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:36 432739 [422733]: open error -2 /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 Feb 26 15:04:36 VIGT01-101 
sanlock[2444]: 2018-02-26 15:04:36 432739 [422733]: s325 open_disk /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 error -2 Feb 26 15:04:37 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:37 432740 [6221]: s325 add_lockspace fail result -19 Feb 26 15:04:42 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:42 432745 [422771]: open error -2 /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 Feb 26 15:04:42 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:42 432745 [422771]: s326 open_disk /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 error -2 I've also attached some logfiles of one host. Any thoughts on this? Markus Schaufler, MSc Amt der OÖ. Landesregierung Direktion Präsidium Abteilung Informationstechnologie Referat ST3 Server A-4021 Linz, Kärntnerstraße 16 Tel.: +43 (0)732 7720 - 13138 Fax: +43 (0)732 7720 - 213255 email: markus.schaufler at ooe.gv.at Internet: www.land-oberoesterreich.gv.at DVR: 0069264 Exchanging messages with the above sender via e-mail is for information purposes only. Legally binding declarations may only be submitted via this medium to the official mailbox it.post at ooe.gv.at. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logfiles.tar.gz Type: application/x-gzip Size: 1815317 bytes Desc: logfiles.tar.gz URL: From awels at redhat.com Mon Feb 26 14:17:18 2018 From: awels at redhat.com (Alexander Wels) Date: Mon, 26 Feb 2018 09:17:18 -0500 Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? 
In-Reply-To: References: <2485548.xbxGyNV15t@awels> Message-ID: <2721966.jdeGbEF6W6@awels> On Sunday, February 25, 2018 1:11:08 AM EST Zip wrote: > Hi Alexander, > > If I try the following: > > > > > > > > > > I get the error in my browser console: > > Sun Feb 25 00:03:56 GMT-600 2018 > org.ovirt.engine.ui.webadmin.plugin.PluginManager SEVERE: Exception caught > while invoking event handler function [UiInit] for plugin [HelloWorld]: > Error: java.lang.IndexOutOfBoundsException webadmin:1:13517 > #dashboard-main> > > Sun Feb 25 00:03:56 GMT-600 2018 > org.ovirt.engine.ui.webadmin.plugin.PluginManager WARNING: Plugin > [HelloWorld] removed from service due to failure > > > However if I remove the line: > > api.addMainTab('FooTab','xtab123','http://foo.com/'); > > And replace it with something simple like: > > alert('Test 123'); > > There are no errors and the alert fires as it should. > > > Any ideas of what I might be missing? > > I am running oVirt 4.2.1 on CentOS, Hosted Engine setup with 1 host for > testing. > > Thanks > > Zip > Well, you found a bug; I will be posting a patch soon. To bypass the problem add the following: api.addMainTab('FooTab','xtab123','http://foo.com/', {priority: N}); Where N is a number between 0 and 5. This will determine where the new menu item will show up in the menu, 0 being at the top below the dashboard, and 5 being right above Events. Normally it is supposed to simply add to the end, however due to the bug it won't. > > From: Alexander Wels > > Date: Monday, February 19, 2018 at 7:54 AM > > To: "users at ovirt.org" > > Cc: Preston > > Subject: Re: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? > > > > On Friday, February 16, 2018 6:31:10 PM EST Zip wrote: > >> Are there any updated docs for the WebUI Plugins API? > > > > Unfortunately no, I haven't had a chance to create updated documentation. > > However the first two links are mostly still accurate as we haven't done > > any major changes to the API. 
> > > > Some things to note that are different from the API documentation in > > https:// www.ovirt.org/develop/release-management/features/ux/uiplugins/ > > for 4.2: > > > > - alignRight no longer has any effect, as the UI in 4.2 no longer respects > > it. - none of the systemTreeNode selection code does anything (since > > there is no more system tree) > > - As noted in the documentation itself the RestApiSessionAcquired is no > > longer available as we have a proper SSO mechanism that you can utilize > > at this point. > > - Main Tabs are now called Main Views (but the api still calls them main > > tabs, so use the apis described). And sub tabs are now called detail > > tabs, but the same thing the API hasn't changed the naming convention so > > use subTabs. - mainTabActionButton location property no longer has any > > meaning and is ignored. > > > > That is it I think, we tried to make it so existing plugins would remain > > working even if some options no longer mean anything. > > > >> I have found the following which all appear to be old and no longer > >> working? 
> >> > >> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interface_Plugins/ >> https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ >> http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_Sunnyvale_2013.pdf >> >> Thanks >> >> Zip From caignec at cines.fr Mon Feb 26 14:18:11 2018 From: caignec at cines.fr (Lionel Caignec) Date: Mon, 26 Feb 2018 15:18:11 +0100 (CET) Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> Message-ID: <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> Ok, so I reply to myself. Version is 4.1.7.6-1. I just deleted manually a snapshot previously created. But this is an I/O-intensive VM, with big disks (2.5 TB and 5 TB). As for the log, I cannot paste all my logs on a public list for security reasons; I will send you the full log in private. Here is an extract relevant to my error: engine.log-20180210:2018-02-09 23:00:03,200+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' creation for VM 'zz_nil' was initiated by snap_user at internal.
engine.log-20180210:2018-02-09 23:01:06,578+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' creation for VM 'zz_nil' has been completed. engine.log-20180220:2018-02-19 17:01:23,800+01 INFO [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID: 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated by acaignec at ldap-cines-authz. engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'. 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') returned status 'finished', result 'success'. 
2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') ended successfully. 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary: All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended -> executing 'endAction' 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'): calling endAction '. 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction [within thread] context: Attempting to endAction 'DestroyImage', 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction for action type DestroyImage threw an exception.: java.lang.NullPointerException at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper.endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl.endAction(CommandCoordinatorImpl.java:340) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.endCommandAction(CommandAsyncTask.java:154) [bll.jar:] at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:] at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:] at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [rt.jar:1.8.0_161] at 
java.util.concurrent.FutureTask.run(FutureTask.java:266) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [rt.jar:1.8.0_161] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [rt.jar:1.8.0_161] at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] ----- Original Message ----- From: "Shani Leviim" To: "Lionel Caignec" Sent: Monday, 26 February 2018 14:42:38 Subject: Re: [ovirt-users] Ghost Snapshot Disk Yes, please. Can you detail a bit more regarding the actions you've done? I'm assuming that since the snapshot had no description, trying to operate on it caused the nullPointerException you've got. But I want to examine what was the cause for that. Also, can you please answer back to the list? *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote: > Version is 4.1.7.6-1 > > Do you want the log from the day I deleted the snapshot? > > ----- Original Message ----- > From: "Shani Leviim" > To: "Lionel Caignec" > Cc: "users" > Sent: Monday, 26 February 2018 14:29:16 > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > Hi, > > What is your engine version, please? > I'm trying to reproduce your steps, to understand better what is the > cause of that error. Therefore, a full engine log is needed. > Can you please attach it? > > Thanks, > > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec wrote: > > > Hi > > > > 1) this is the error message from ui.log > > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation > > name: 8C01181C3B121D0AAE1312275CC96415 > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > server.gwt.OvirtRemoteLoggingService] > > (default task-3) [] Uncaught exception: com.google.gwt.core.client. > JavaScriptException: > > (TypeError) > > __gwt$exception: : Cannot read property 'F' of null > > at org.ovirt.engine.ui.uicommonweb.models.storage.
> > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) > > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess( > Frontend.java:233) > > [frontend.jar:] > > at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend. > java:233) > > [frontend.jar:] > > at org.ovirt.engine.ui.frontend.communication. > > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) > > [frontend.jar:] > > at org.ovirt.engine.ui.frontend.communication. > > OperationProcessor$2.onSuccess(OperationProcessor.java:139) > > [frontend.jar:] > > at org.ovirt.engine.ui.frontend.communication. > > GWTRPCCommunicationProvider$5$1.$onSuccess(GWTRPCCommunicationProvider. > java:269) > > [frontend.jar:] > > at org.ovirt.engine.ui.frontend.communication. > > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider. > java:269) > > [frontend.jar:] > > at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter. > > onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] > > at com.google.gwt.http.client.Request.$fireOnResponseReceived( > Request.java:237) > > [gwt-servlet.jar:] > > at com.google.gwt.http.client.RequestBuilder$1. > onReadyStateChange(RequestBuilder.java:409) > > [gwt-servlet.jar:] > > at Unknown.eval(webadmin-0.js at 65) > > at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) > > [gwt-servlet.jar:] > > at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) > > [gwt-servlet.jar:] > > at Unknown.eval(webadmin-0.js at 54) > > > > > > 2) This line seems to be about the bad disk : > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > > 3) Snapshot table is empty for the concerned vm_id. 
> > > > ----- Original Message ----- > > From: "Shani Leviim" > > To: "Lionel Caignec" > > Cc: "users" > > Sent: Monday, 26 February 2018 13:31:23 > > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > > > Hi Lionel, > > > > The error message you've mentioned sounds like a UI error. > > Can you please attach your ui log? > > > > Also, on the data from the 'images' table you've uploaded, can you describe > > which line is the relevant disk? > > > > Finally (for now), in case the snapshot was deleted, can you please > > validate it by viewing the output of: > > $ select * from snapshots; > > > > > > > > *Regards,* > > > > *Shani Leviim* > > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec > wrote: > > > Hi Shani, > > > thank you for helping me with your reply, > > > I just made a little mistake in my explanation. In fact it is the snapshot that > > > does not exist anymore. It is the disk(s) related to it which still > > > exist, and perhaps the LVM volume. > > > So can I delete this disk manually in the database? What about the LVM > > volume? > > > Is it better to recreate the disk, sync the data and destroy the old one? > > > > > > > > > > > > ----- Original Message ----- > > > From: "Shani Leviim" > > > To: "Lionel Caignec" > > > Cc: "users" > > > Sent: Sunday, 25 February 2018 14:26:41 > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > Hi Lionel, > > > > > > You can try to delete that snapshot directly from the database. > > > > > > In case of using psql [1], once you've logged in to your database, you > > can > > > run this query: > > > $ select * from snapshots where vm_id = ''; > > > This one would list the snapshots associated with a VM by its id. > > > > > > In case you don't have your vm_id, you can locate it by querying: > > > $ select * from vms where vm_name = 'nil'; > > > This one would show you some details about a VM by its name (including > > the > > > vm's id).
> > > > > > Once you've found the relevant snapshot, you can delete it by running: > > > $ delete from snapshots where snapshot_id = ''; > > > This one would delete the desired snapshot from the database. > > > > > > Since it's a delete operation, I would suggest confirming the ids > before > > > executing it. > > > > > > Hope you've found it useful! > > > > > > [1] > > > https://www.ovirt.org/documentation/install-guide/ > > appe-Preparing_a_Remote_ > > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > > > > > > > *Regards,* > > > > > > *Shani Leviim* > > > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > > wrote: > > > > > > > Hi, > > > > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost > without > > > > name or uuid, only information is size (see attachment). In the > > snapshot > > > > tab there is no trace about this disk. > > > > > > > > In database (table images) i found this : > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > | 2 | 4 | 17e26476-cecb-441d-a5f7- > 46ab3ef387ee > > | > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f > > | > > > > 1 | 2 > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > | 2 | 4 | bf834a91-c69f-4d2c-b639- > 116ed58296d8 > > | > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f > > | > > > > 1 | 2 > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > > > 23:00:02.855+01 | 
390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > > > But I do not know which line is my disk. Is it possible to delete it > > > > directly in the database? > > > > Or is it better to dump my disk to a new one and delete the > > "corrupted" > > > > one? > > > > > > > > Another thing: when I try to move the disk to another storage > domain I > > > > always get "uncaught exception occurred ..." and no error in engine.log. > > > > > > > > > > > > Thank you for helping. > > > > > > > > -- > > > > Lionel Caignec > > > > > > > > _______________________________________________ > > > > Users mailing list > > > > Users at ovirt.org > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > From plord at intricatenetworks.com Mon Feb 26 14:50:37 2018 From: plord at intricatenetworks.com (Zip) Date: Mon, 26 Feb 2018 08:50:37 -0600 Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? In-Reply-To: <2721966.jdeGbEF6W6@awels> References: <2485548.xbxGyNV15t@awels> <2721966.jdeGbEF6W6@awels> Message-ID: Thanks Alexander, This works: api.addMainTab('oVirtTab', 'ovirt-tab', 'http://www.something.com', {priority: 5}); Can you advise how to get the api.addSubTab to work? api.addSubTab('ovirt-tab', 'Test 123', 'test-123', '#'); I tried the above and many other combinations; no errors, it just doesn't work. Maybe I am wrong about what it does? I am looking to add a submenu - same as the current 4.2.1 UI shows Network and then (Vnic Profiles) (Networks) as submenus.
Thanks Zip > > > On Sunday, February 25, 2018 1:11:08 AM EST Zip wrote: >> Hi Alexander, >> >> If I try the following: >> >> >> >> >> >> >> >> >> >> I get the error in my browser console: >> >> Sun Feb 25 00:03:56 GMT-600 2018 >> org.ovirt.engine.ui.webadmin.plugin.PluginManager SEVERE: Exception caught >> while invoking event handler function [UiInit] for plugin [HelloWorld]: >> Error: java.lang.IndexOutOfBoundsException webadmin:1:13517 >> > #dashboard-main> >> >> Sun Feb 25 00:03:56 GMT-600 2018 >> org.ovirt.engine.ui.webadmin.plugin.PluginManager WARNING: Plugin >> [HelloWorld] removed from service due to failure >> >> >> However if I remove the line: >> >> api.addMainTab('FooTab','xtab123','http://foo.com/?); >> >> And replace it with something simple like: >> >> alert(?Test 123?); >> >> There are no errors and the alert fires as it should. >> >> >> Any ideas of what I might be missing? >> >> I am running oVirt 4.2.1 on CentOS ? Hosted Engine setup with 1 host for >> testing. >> >> Thanks >> >> Zip >> > > Well you found a bug, I will be posting a patch soon. To bypass the problem > add the following: > > api.addMainTab('FooTab','xtab123','http://foo.com/, {priority: N}); > > Where N is a number between 0 and 5 > > This will determine where the new menu item will show up in the menu, 0 being > at the top below the dashboard, and 5 being right above Events. Normally it is > supposed to simply add to the end, however due to the bug it won't. > >>> > From: Alexander Wels >>> > Date: Monday, February 19, 2018 at 7:54 AM >>> > To: "users at ovirt.org" >>> > Cc: Preston >>> > Subject: Re: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? >>> > >>> > On Friday, February 16, 2018 6:31:10 PM EST Zip wrote: >>>> >> Are there any updated docs for the WebUI Plugins API? >>> > >>> > Unfortunately no, I haven't had a chance to create updated documentation. >>> > However the first two links are mostly still accurate as we haven't done >>> > any major changes to the API. 
>>> > >>> > Some things to note that are different from the API documentation in >>> > https:// www.ovirt.org/develop/release-management/features/ux/uiplugins/ >>> > for 4.2: >>> > >>> > - alignRight no longer has any effect, as the UI in 4.2 no longer >>> respects >>> > it. - none of the systemTreeNode selection code does anything (since >>> > there is no more system tree) >>> > - As noted in the documentation itself the RestApiSessionAcquired is no >>> > longer available as we have a proper SSO mechanism that you can utilize >>> > at this point. >>> > - Main Tabs are now called Main Views (but the api still calls them main >>> > tabs, so use the apis described). And sub tabs are now called detail >>> > tabs, but the same thing the API hasn't changed the naming convention so >>> > use subTabs. - mainTabActionButton location property no longer has any >>> > meaning and is ignored. >>> > >>> > That is it I think, we tried to make it so existing plugins would remain >>> > working even if some options no longer mean anything. >>> > >>>> >> I have found the following which all appear to be old and no longer >>>> >> working? >>>> >> >>>> >> >>>> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interfac >>>> >> e_Pl ugins/ >>>> >> >>>> https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ >>>> >> >>>> http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_S >>>> >> unny vale_2013.pdf >>>> >> >>>> >> Thanks >>>> >> >>>> >> Zip > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From awels at redhat.com Mon Feb 26 14:59:10 2018 From: awels at redhat.com (Alexander Wels) Date: Mon, 26 Feb 2018 09:59:10 -0500 Subject: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? 
In-Reply-To: References: <2721966.jdeGbEF6W6@awels> Message-ID: <4308420.J4MvGUoxoT@awels> On Monday, February 26, 2018 9:50:37 AM EST Zip wrote: > Thanks Alexander, > > This works: > > api.addMainTab('oVirtTab', 'ovirt-tab', 'http://www.something.com', > {priority: 5}); > > Can you advise how to get the api.addSubTab to work? > > api.addSubTab('ovirt-tab', 'Test 123', 'test-123', '#'); > > I tried the above and many other combinations; no errors, it just doesn't work. > Maybe I am wrong about what it does? > > I am looking to add a submenu - same as the current 4.2.1 UI shows Network > and then (Vnic Profiles) (Networks) as submenus. > > Thanks > > Zip > So api.addSubTab works on adding a 'detail' tab, i.e. when you click on, let's say, the VM's name and go to the detail view, addSubTab will add another tab in there. What you want to do is currently NOT possible. I have a TODO in my long list of items to add that capability to the UI plugin API. When the API was designed, the secondary menus like Networks didn't exist; it was all main tabs or sub tabs. The primary and secondary menus are just a way to organize the main views instead of having a giant list of menu items. For now I would just put it in a primary menu until I get a chance to update the API to allow you to add it to a secondary menu.
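The registration flow and the workaround discussed above can be sketched as a toy, self-contained script. This is only an illustration: in a real plugin the `api` object comes from WebAdmin (via `parent.pluginApi(...)`), so here a minimal mock stands in for it, and the plugin name, URLs, and the 'VirtualMachine' entity label are illustrative assumptions rather than verified values.

```javascript
// Stand-in for the api object WebAdmin hands to a UI plugin. It only
// records the calls made against it, so the registration flow can be
// exercised outside the browser.
function makeMockApi() {
  const calls = [];
  return {
    calls: calls,
    handlers: null,
    register: function (handlers) { calls.push(['register']); this.handlers = handlers; },
    ready: function () { calls.push(['ready']); this.handlers.UiInit(); },
    addMainTab: function (label, historyToken, url, options) {
      calls.push(['addMainTab', label, historyToken, url, options]);
    },
    addSubTab: function (entityType, label, historyToken, url) {
      calls.push(['addSubTab', entityType, label, historyToken, url]);
    }
  };
}

const api = makeMockApi(); // in the browser this would be parent.pluginApi('HelloWorld')

api.register({
  UiInit: function () {
    // Workaround for the 4.2.1 bug discussed above: pass an explicit
    // priority (0..5) so the new main tab gets placed in the menu.
    api.addMainTab('FooTab', 'xtab123', 'http://foo.com/', { priority: 5 });
    // addSubTab adds a *detail* tab (shown when an entity such as a VM is
    // selected), not a secondary menu entry -- secondary menus are not
    // reachable through the plugin API.
    api.addSubTab('VirtualMachine', 'Foo Details', 'foo-details', 'http://foo.com/details');
  }
});
api.ready(); // WebAdmin then invokes the registered UiInit handler
```

The mock records `register`, `ready`, and then the two tab additions in order, which mirrors the sequence a real plugin host would drive.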
> > On Sunday, February 25, 2018 1:11:08 AM EST Zip wrote: > >> Hi Alexander, > >> > >> If I try the following: > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> I get the error in my browser console: > >> > >> Sun Feb 25 00:03:56 GMT-600 2018 > >> org.ovirt.engine.ui.webadmin.plugin.PluginManager SEVERE: Exception > >> caught > >> while invoking event handler function [UiInit] for plugin [HelloWorld]: > >> Error: java.lang.IndexOutOfBoundsException webadmin:1:13517 > >> >> n_US #dashboard-main> > >> > >> Sun Feb 25 00:03:56 GMT-600 2018 > >> org.ovirt.engine.ui.webadmin.plugin.PluginManager WARNING: Plugin > >> [HelloWorld] removed from service due to failure > >> > >> However if I remove the line: > >> api.addMainTab('FooTab','xtab123','http://foo.com/?); > >> > >> And replace it with something simple like: > >> > >> alert(?Test 123?); > >> > >> There are no errors and the alert fires as it should. > >> > >> > >> Any ideas of what I might be missing? > >> > >> I am running oVirt 4.2.1 on CentOS ? Hosted Engine setup with 1 host for > >> testing. > >> > >> Thanks > >> > >> Zip > > > > Well you found a bug, I will be posting a patch soon. To bypass the > > problem > > add the following: > > > > api.addMainTab('FooTab','xtab123','http://foo.com/, {priority: N}); > > > > Where N is a number between 0 and 5 > > > > This will determine where the new menu item will show up in the menu, 0 > > being at the top below the dashboard, and 5 being right above Events. > > Normally it is supposed to simply add to the end, however due to the bug > > it won't.> > >>> > From: Alexander Wels > >>> > Date: Monday, February 19, 2018 at 7:54 AM > >>> > To: "users at ovirt.org" > >>> > Cc: Preston > >>> > Subject: Re: [ovirt-users] oVirt 4.2 WebUI Plugin API Docs? > >>> > > >>> > On Friday, February 16, 2018 6:31:10 PM EST Zip wrote: > >>>> >> Are there any updated docs for the WebUI Plugins API? 
> >>> > > >>> > Unfortunately no, I haven't had a chance to create updated > >>> > documentation. > >>> > However the first two links are mostly still accurate as we haven't > >>> > done > >>> > any major changes to the API. > >>> > > >>> > Some things to note that are different from the API documentation in > >>> > https:// > >>> > www.ovirt.org/develop/release-management/features/ux/uiplugins/ > >>> > for 4.2: > >>> > > >>> > - alignRight no longer has any effect, as the UI in 4.2 no longer > >>> > >>> respects > >>> > >>> > it. - none of the systemTreeNode selection code does anything (since > >>> > there is no more system tree) > >>> > - As noted in the documentation itself the RestApiSessionAcquired is > >>> > no > >>> > longer available as we have a proper SSO mechanism that you can > >>> > utilize > >>> > at this point. > >>> > - Main Tabs are now called Main Views (but the api still calls them > >>> > main > >>> > tabs, so use the apis described). And sub tabs are now called detail > >>> > tabs, but the same thing the API hasn't changed the naming convention > >>> > so > >>> > use subTabs. - mainTabActionButton location property no longer has > >>> > any > >>> > meaning and is ignored. > >>> > > >>> > That is it I think, we tried to make it so existing plugins would > >>> > remain > >>> > working even if some options no longer mean anything. > >>> > > >>>> >> I have found the following which all appear to be old and no > >>>> >> longer > >>>> >> working? 
> >>>> >> https://www.ovirt.org/documentation/admin-guide/appe-oVirt_User_Interface_Plugins/ > >>>> >> https://www.ovirt.org/develop/release-management/features/ux/uiplugins/ > >>>> >> http://resources.ovirt.org/old-site-files/UI_Plugins_at_oVirt_Workshop_Sunnyvale_2013.pdf > >>>> >> > >>>> >> Thanks > >>>> >> > >>>> >> Zip From tbaror at gmail.com Fri Feb 23 16:33:58 2018 From: tbaror at gmail.com (Tal Bar-Or) Date: Fri, 23 Feb 2018 18:33:58 +0200 Subject: [ovirt-users] Before migrating to Ovirt Message-ID: Hello Ovirt users, Currently we have 4 Xen pools in our organization; each pool has 8 servers. Due to the new 7.3 version change, we plan to migrate our upcoming 5th pool to Ovirt. The decision to do a PoC migration to Ovirt came from lots of Xen users who suggested Ovirt as the better and more mature product to migrate to. I started to test Ovirt, currently only with one server with the engine installed. My question regarding Ovirt, since I am really a newbie with that system, is: is it possible to have multiple engines on it? What happens if the engine server crashes for some reason? Can I load it on another cluster server? Please advise. Thanks -- Tal Bar-or From recreationh at gmail.com Sat Feb 24 17:41:48 2018 From: recreationh at gmail.com (Terry hey) Date: Sun, 25 Feb 2018 01:41:48 +0800 Subject: [ovirt-users] VM is locked, servlet, and SpiceVersion.txt problem In-Reply-To: References: Message-ID: Thanks for your reply. OK, then do you know how to delete the locked virtual machine? I cannot find the locked VM by using "./unlock_entity.sh", and I cannot do anything with it. Thank you. 2018-02-23 20:36 GMT+08:00 Tomas Jelinek : > > > On Fri, Feb 23, 2018 at 11:31 AM, Terry hey wrote: > >> Hello everyone! >> Thank you for your time to analyze my problem. In total, I have two questions. >> I encountered a VM image lock problem.
The following actions are what I did that made the VM image locked. >> First, I imported a VM. It took too long to import, and the >> engine.log repeatedly said it was waiting on a child command id: >> "2018-02-23 16:37:46,603+08 INFO [org.ovirt.engine.core.bll.ConcurrentChildCommandsExecutionCallback] (EE-ManagedThreadFactory-engineScheduled-Thread-16) >> [1c44d543-4dcc-429d-a172-386cc860afe0] Command 'ImportVm' (id: >> '09718bd2-797d-4323-b1ad-1a85604543c3') waiting on child command id: >> '9e285b2d-c0c7-4a75-8c70-b619b45c6855' type:'CopyImageGroup' to complete >> " >> So I thought the operation was not normal. So: >> 1. I used "./unlock_entity.sh" to unlock the virtual disk of the VM. >> 2. The virtual disk was unlocked, but the VM was still locked. Therefore, I used >> "./unlock_entity.sh" to show locked VMs. But there was nothing. >> 3. Then I used "./taskcleaner.sh" to clean all tasks. But nothing happened. >> >> Q1: So, now, I would like to ask how to unlock the VM image so that I can >> delete or use it. >> >> Q2: In addition, there are two errors or warnings that appeared in engine.log: >> 1. 2018-02-23 09:57:19,495+08 WARN [org.ovirt.engine.core.utils.servlet.ServletUtils] >> (default task-272) [] File '/usr/share/ovirt-engine/ui-plugins/dashboard-resources/css/main-tab.d3769419.css' is 2839039 >> bytes long. Please reconsider using this servlet for files larger than >> 1048576 bytes. >> 2. 2018-02-23 09:47:39,656+08 ERROR [org.ovirt.engine.core.utils.servlet.ServletUtils] >> (default task-193) [] Can't read file '/usr/share/ovirt-engine/files/spice/SpiceVersion.txt' >> for request '/ovirt-engine/services/files/spice/SpiceVersion.txt', will >> send a 404 error response. >> Do you guys have any idea what they mean? >> >
> > I have opened a bug to clean this up: https://bugzilla.redhat.com/ > show_bug.cgi?id=1548407 > But don't worry about this error, it has no effect on the function. > > >> >> I really appreciate you help. Thank you! >> >> Regards >> Terry >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Markus.Schaufler at ooe.gv.at Mon Feb 26 14:11:12 2018 From: Markus.Schaufler at ooe.gv.at (Markus.Schaufler at ooe.gv.at) Date: Mon, 26 Feb 2018 14:11:12 +0000 Subject: [ovirt-users] oVirt Hosts fail to Upgrade Message-ID: <9D6F18D2AC0D5245BE068C2BEBC06946285293@msli01-202.res01.ads.ooe.local> Hi! I recently installed a 4-node Test-Cluster with FC storage for a PoC with a self-hosted-engine and some Test VMs running. I tried to upgrade the hosts via Webinterface via Maintainance -> Check for Upgrade -> Upgrade. They fail all with the same errors in the messages log on each host: Feb 26 15:04:31 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:31 432734 [2460]: s324 add_lockspace fail result -19 Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to read metadata from /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats#012 f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read 
state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run#012 self._storage_broker.get_raw_stats()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats#012 .format(str(e)))#012RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to update state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 77, in run#012 if (self._status_broker._inquire_whiteboard_lock() or#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 183, in _inquire_whiteboard_lock#012 self.host_id, self._lease_file)#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 121, in host_id#012 raise ex.HostIdNotLockedError("Host id is not set")#012HostIdNotLockedError: Host id is not set Feb 26 15:04:36 VIGT01-101 journal: ovirt-ha-broker ovirt_hosted_engine_ha.broker.storage_broker.StorageBroker ERROR Failed to read metadata from /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 151, in get_raw_stats#012 f = os.open(path, direct_flag | os.O_RDONLY | os.O_SYNC)#012OSError: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 journal: 
ovirt-ha-broker ovirt_hosted_engine_ha.broker.status_broker.StatusBroker.Update ERROR Failed to read state.#012Traceback (most recent call last):#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/status_broker.py", line 88, in run#012 self._storage_broker.get_raw_stats()#012 File "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/storage_broker.py", line 162, in get_raw_stats#012 .format(str(e)))#012RequestError: failed to read metadata: [Errno 2] No such file or directory: '/var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/f4ac3b2b-5219-4e22-b6c9-1520ce1ce3e0/0977a863-90b0-4ddb-8559-ed7770c38f69' Feb 26 15:04:36 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:36 432739 [422733]: open error -2 /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 Feb 26 15:04:36 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:36 432739 [422733]: s325 open_disk /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 error -2 Feb 26 15:04:37 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:37 432740 [6221]: s325 add_lockspace fail result -19 Feb 26 15:04:42 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:42 432745 [422771]: open error -2 /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 Feb 26 15:04:42 VIGT01-101 sanlock[2444]: 2018-02-26 15:04:42 432745 [422771]: s326 open_disk /var/run/vdsm/storage/a4647768-2c33-4318-9111-ca995fcc03c8/4ba684c0-3d8d-4148-99cd-4f4c50a4714a/3f764582-cb12-4c4e-91d3-8bc13a5427d6 error -2 I've also attached some logfiles of one Host. Any thoughts on this? Markus Schaufler, MSc Amt der O?. 
Landesregierung Direktion Präsidium Abteilung Informationstechnologie Referat ST3 Server A-4021 Linz, Kärntnerstraße 16 Tel.: +43 (0)732 7720 - 13138 Fax: +43 (0)732 7720 - 213255 email: markus.schaufler at ooe.gv.at Internet: www.land-oberoesterreich.gv.at DVR: 0069264 The exchange of messages with the above sender via e-mail serves information purposes only. Legally valid declarations may be transmitted via this medium only to the official mailbox it.post at ooe.gv.at. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: logfiles.tar.gz Type: application/x-gzip Size: 1815317 bytes Desc: logfiles.tar.gz URL: From didi at redhat.com Mon Feb 26 15:13:37 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Mon, 26 Feb 2018 17:13:37 +0200 Subject: [ovirt-users] Before migrating to Ovirt In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 6:33 PM, Tal Bar-Or wrote: > Hello Ovirt users, > > Currently we have 4 Xen pools in our organization; each pool has 8 > servers. > Due to the new 7.3 version change, we plan to migrate our upcoming 5th pool to > Ovirt; the decision to do a PoC migration to Ovirt came from many Xen users > who suggested Ovirt is a better and more mature product to migrate to. > > I started to test Ovirt, currently only with one server with the engine > installed. My question regarding Ovirt, since I am really a newbie with this > system, is: is it possible to have multiple engines on it? > What happens if the engine server crashes for some reason? Can I load it on > another cluster server? > Please advise The standard way to have HA for the oVirt engine is to set it up as a self-hosted engine and have more than one host in the hosted-engine cluster.
We do know that there are people doing HA using other means, based on external HA software (heartbeat), but that's more expensive (in hardware, work, and perhaps software, depending on what you use) - I'd suggest doing that only if your organization already has in-house expertise and experience with such software. Good luck and best regards, -- Didi From dyasny at gmail.com Mon Feb 26 15:13:50 2018 From: dyasny at gmail.com (Dan Yasny) Date: Mon, 26 Feb 2018 10:13:50 -0500 Subject: [ovirt-users] Before migrating to Ovirt In-Reply-To: References: Message-ID: On Fri, Feb 23, 2018 at 11:33 AM, Tal Bar-Or wrote: > Hello Ovirt users, > > Currently we have 4 Xen pools in our organization; each pool has 8 > servers. > Due to the new 7.3 version change, we plan to migrate our upcoming 5th pool to > Ovirt; the decision to do a PoC migration to Ovirt came from many Xen users > who suggested Ovirt is a better and more mature product to migrate to. > > I started to test Ovirt, currently only with one server with the engine > installed. My question regarding Ovirt, since I am really a newbie with this > system, is: is it possible to have multiple engines on it? > No, but you can either cluster the engine in an HA cluster, run backups frequently (I've had environments where I did an engine dump every hour without any issues), or use the self-hosted engine, which is basically the engine as a VM, so if a host goes down, it gets started on another. What you cannot do is have multiple engines load-balancing the work, like OpenStack controllers, for example. > What happens if the engine server crashes for some reason? Can I load it on > another cluster server? > If the engine goes down, the hosts and VMs keep working, so you have time to restore from backup or fix the engine without outages, besides a management outage of course. If you are running hosted engine, it will simply get started on another eligible host.
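The hourly engine dump Dan describes can be scripted as a thin wrapper around oVirt's `engine-backup` tool. The sketch below only builds the command line; the backup directory and the timestamped filenames are hypothetical choices for illustration, not oVirt defaults.

```python
import datetime


def build_backup_command(backup_dir="/var/backup/engine"):
    """Build an engine-backup invocation for a timestamped full backup."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    return [
        "engine-backup",
        "--mode=backup",
        "--scope=all",
        "--file={}/engine-{}.tar.gz".format(backup_dir, stamp),
        "--log={}/engine-{}.log".format(backup_dir, stamp),
    ]


if __name__ == "__main__":
    # Print the command; schedule it from cron (e.g. hourly) on the engine host.
    print(" ".join(build_backup_command()))
```

Run hourly from cron on the engine machine, this gives the backup cadence described above; a restore would go through `engine-backup --mode=restore` on the replacement engine.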
> Please advice > > Thanks > > -- > Tal Bar-or > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fromani at redhat.com Mon Feb 26 16:12:17 2018 From: fromani at redhat.com (Francesco Romani) Date: Mon, 26 Feb 2018 17:12:17 +0100 Subject: [ovirt-users] Network and disk inactive after 4.2.1 upgrade In-Reply-To: <20180213150534.GA15147@cmadams.net> References: <20180213150534.GA15147@cmadams.net> Message-ID: <763f1316-6c9e-0927-7e5a-c4b6c3231c38@redhat.com> On 02/13/2018 04:05 PM, Chris Adams wrote: > I upgraded my dev cluster from 4.2.0 to 4.2.1 yesterday, and I noticed > that all my VMs show the network interfaces unplugged and disks inactive > (despite the VMs being up and running just fine). This includes the > hosted engine. > > I had not rebooted VMs after upgrading, so I tried powering one off and > on; it would not start until I manually activated the disk. > > I haven't seen a problem like this before (although it usually means > that I did something wrong :) ) - what should I look at? Hi, you may have hit this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1542117 it could affect all VMs started with Engine <= 4.1 and later imported in 4.2, or Hosted Engine. If your VM was first created under oVirt 4.2, please file a new bug. HTH, -- Francesco Romani Senior SW Eng., Virtualization R&D Red Hat IRC: fromani github: @fromanirh From marcoc at prismatelecomtesting.com Mon Feb 26 16:49:02 2018 From: marcoc at prismatelecomtesting.com (Marco Lorenzo Crociani) Date: Mon, 26 Feb 2018 17:49:02 +0100 Subject: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines Message-ID: Hi, I can't access avx512* instruction set from virtual machines. 
I have made a one server compute cluster to test new hardware: oVirt 4.1.9 CentOS 7 Cluster CPU Type: Intel Skylake Family Compatibility Version: 4.1 HOST: CPU Model Name:Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz Family: Family CPU Type: Intel Skylake Family Virtual Machine settings A) VM: Custom CPU Type: Use cluster default(Intel Skylake Family) General Tab shows: Guest CPU Type: Skylake-Client avx512: NO B) VM: Custom CPU Type: Skylake-Client General Tab shows: Guest CPU Type: Skylake-Client avx512: NO C) VM: Custom CPU Type: Use cluster default(Intel Skylake Family) [grey - cannot modify] Migration mode: Do not allow migration Pass-Through Host CPU General Tab shows: Guest CPU Type: Skylake-Client avx512: YES ( cat /proc/cpuinfo |grep avx512: avx512f avx512dq avx512cd avx512bw avx512vl ) Using pass-through host cpu (disabling vm migration) is the only way to access avx512 in a VM, is it a bug or am I missing something? Regards, -- Marco Crociani Prisma Telecom Testing S.r.l. via Petrocchi, 4 20127 MILANO ITALY Phone: +39 02 26113507 Fax: +39 02 26113597 e-mail: marcoc at prismatelecomtesting.com web: http://www.prismatelecomtesting.com From Alessandro.DeSalvo at roma1.infn.it Mon Feb 26 17:15:38 2018 From: Alessandro.DeSalvo at roma1.infn.it (Alessandro De Salvo) Date: Mon, 26 Feb 2018 18:15:38 +0100 Subject: [ovirt-users] Hosted Engine VM not imported In-Reply-To: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> References: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> Message-ID: Hi, after checking the engine.log I see a bunch of error like this too: 2018-02-26 03:22:06,806+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] HostName = atlas-svc-18 2018-02-26 03:22:06,806+01 ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Command 'GetVolumeInfoVDSCommand(HostName = atlas-svc-18, 
GetVolumeInfoVDSCommandParameters:{hostId='b18c40d8-7932-4b5d-995e-8ebc5ab2e3e2', storagePoolId='00000001-0001-0001-0001-000000000056', storageDomainId='f02d7d5d-1459-48b8-bf27-4225cdfdce23', imageGroupId='c815ec3f-6e31-4b08-81be-e515e803edce', imageId='c815ec3f-6e31-4b08-81be-e515e803edce'})' execution failed: VDSGenericException: VDSErrorException: Failed to GetVolumeInfoVDS, error = Image path does not exist or cannot be accessed/created: (u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',), code = 254 2018-02-26 03:22:06,806+01 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.GetVolumeInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] FINISH, GetVolumeInfoVDSCommand, log id: 38adaef0 2018-02-26 03:22:06,806+01 ERROR [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Failed to get the volume information, marking as FAILED 2018-02-26 03:22:06,806+01 INFO [org.ovirt.engine.core.vdsbroker.irsbroker.GetImageInfoVDSCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] FINISH, GetImageInfoVDSCommand, log id: 3ad29b91 2018-02-26 03:22:06,806+01 WARN [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Validation of action 'ImportVm' failed for user SYSTEM. 
Reasons: VAR__ACTION__IMPORT,VAR__TYPE__VM,ACTION_TYPE_FAILED_VM_IMAGE_DOES_NOT_EXIST 2018-02-26 03:22:06,807+01 INFO [org.ovirt.engine.core.bll.exportimport.ImportVmCommand] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Lock freed to object 'EngineLock:{exclusiveLocks='[HostedEngine=VM_NAME, 235b91ce-b6d8-44c6-ac26-791ac3946727=VM]', sharedLocks='[235b91ce-b6d8-44c6-ac26-791ac3946727=REMOTE_VM]'}' 2018-02-26 03:22:06,807+01 ERROR [org.ovirt.engine.core.bll.HostedEngineImporter] (EE-ManagedThreadFactory-engine-Thread-97153) [796a8bc5] Failed importing the Hosted Engine VM Any help? Thanks, ? ??? Alessandro Il 24/02/18 14:32, Alessandro De Salvo ha scritto: > Hi, > > I have just migrated my dev cluster to the latest master, reinstalling > the engine VM and reimporting from a previous backup. I'm trying with > 4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos > > I had a few problems: > > - the documentation seems to be outdated, and I just find by searching > the archives that it's needed to add the two (undocumented) options > --he-remove-storage-vm --he-remove-hosts > > - despite the fact I selected "No" to running the engine-setup command > in the VM (the ovirt appliance), the engine-setup is executed when > running hosted-engine --deploy, and as a result the procedure does not > stop allowing to reload the db backup. 
The only way I found was to put > the hosted-engine in global maintenance mode, stop the ovirt-engine, > do an engine-cleanup and reload the db, then it's possible to add the > first host in the GUI, but must be done manually > > - after it's all done, I can see the hosted_storage is imported, but > the HostedEngine is not imported, and in the Events I see messages > like this: > > VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does not > exist or cannot be accessed/created: > (u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',) > > ?? the path here is clearly wrong, it should be > /rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, > and I see the hosted_engine.conf in the shared storage has it > correctly set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0. > > > Any hint on what is not allowing the HostedEngine to be imported? I > didn't find a way to add other hosted engine nodes if the HE VM is not > imported in the cluster, like we were used in the past with the CLI > using hosted-engine --deploy on multiple hosts. > > Thanks for any help, > > > ??? 
Alessandro > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users From stirabos at redhat.com Mon Feb 26 17:17:52 2018 From: stirabos at redhat.com (Simone Tiraboschi) Date: Mon, 26 Feb 2018 18:17:52 +0100 Subject: [ovirt-users] Hosted Engine VM not imported In-Reply-To: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> References: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> Message-ID: On Sat, Feb 24, 2018 at 2:32 PM, Alessandro De Salvo < Alessandro.DeSalvo at roma1.infn.it> wrote: > Hi, > > I have just migrated my dev cluster to the latest master, reinstalling the > engine VM and reimporting from a previous backup. I'm trying with > 4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos > > I had a few problems: > > - the documentation seems to be outdated, and I just find by searching the > archives that it's needed to add the two (undocumented) options > --he-remove-storage-vm --he-remove-hosts > > - despite the fact I selected "No" to running the engine-setup command in > the VM (the ovirt appliance), the engine-setup is executed when running > hosted-engine --deploy, and as a result the procedure does not stop > allowing to reload the db backup. 
The only way I found was to put > the hosted-engine in global maintenance mode, stop the ovirt-engine, do an > engine-cleanup and reload the db, then it's possible to add the first host > in the GUI, but must be done manually > > - after it's all done, I can see the hosted_storage is imported, but the > HostedEngine is not imported, and in the Events I see messages like this: > > VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does not > exist or cannot be accessed/created: (u'/rhev/data-center/mnt/glust > erSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/ > f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31- > 4b08-81be-e515e803edce',) > > the path here is clearly wrong, it should be > /rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn. > it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/ > images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, > and I see the hosted_engine.conf in the shared storage has it correctly set > as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0. > > > Any hint on what is not allowing the HostedEngine to be imported? I didn't > find a way to add other hosted engine nodes if the HE VM is not imported in > the cluster, like we were used in the past with the CLI using hosted-engine > --deploy on multiple hosts. > Ciao Alessandro, with 4.2.1 we introduced a new deployment flow for hosted-engine based on Ansible. In this new flow we run a local VM with a running engine and we use that engine to create a storage domain and a VM there. At the end we shut down the locally running engine and move its disk over the disk of the VM created by the engine on the shared storage. At this point we no longer need the autoimport process, since the migrated engine already contains the engine VM and its storage domain.
We have an RFE, for this new flow, to add a mechanism to inject an existing engine backup to be automatically restored before executing engine-setup for migration/disaster-recovery scenarios. Unfortunately it's still not ready, but we have a hook mechanism to have hosted-engine-setup execute custom Ansible tasks before running engine-setup; we have an example in /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml.example Otherwise the old flow is still there; you just have to add --noansible and everything should work as in the past. > > Thanks for any help, > > > Alessandro > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alessandro.DeSalvo at roma1.infn.it Mon Feb 26 19:09:35 2018 From: Alessandro.DeSalvo at roma1.infn.it (Alessandro De Salvo) Date: Mon, 26 Feb 2018 20:09:35 +0100 Subject: [ovirt-users] Hosted Engine VM not imported In-Reply-To: References: <155ec691-445f-ec0b-9083-7236dd868b9f@roma1.infn.it> Message-ID: <3d7e3b84-6339-c301-b00e-95a7c6708813@roma1.infn.it> Ciao Simone, many thanks. So, how are we supposed to use those hooks? Should we just create a file /usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml with the instructions to restore? Do you have an example for doing that? For the moment I think I'll stick to the old procedure by calling --noansible, as you suggest. I think the documentation should be updated anyway, at least to add the --he-remove-storage-vm and --he-remove-hosts options, as well as the new procedure and the override with --noansible. Also, wouldn't it be safer to stick to the old procedure until the new one is fully operational?
Or maybe at least a warning to the user, otherwise no one will ever be able to restore a db and have it all functional with the default options. Thanks, Alessandro On 26/02/18 18:17, Simone Tiraboschi wrote: > > > On Sat, Feb 24, 2018 at 2:32 PM, Alessandro De Salvo > > wrote: > > Hi, > > I have just migrated my dev cluster to the latest master, > reinstalling the engine VM and reimporting from a previous backup. > I'm trying with 4.3.0-0.0.master.20180222192611.git01e6ace.el7.centos > > I had a few problems: > > - the documentation seems to be outdated, and I just find by > searching the archives that it's needed to add the two > (undocumented) options --he-remove-storage-vm --he-remove-hosts > > - despite the fact I selected "No" to running the engine-setup > command in the VM (the ovirt appliance), the engine-setup is > executed when running hosted-engine --deploy, and as a result the > procedure does not stop allowing to reload the db backup. The only > way I found was to put the hosted-engine in global maintenance > mode, stop the ovirt-engine, do an engine-cleanup and reload the > db, then it's possible to add the first host in the GUI, but must > be done manually > > - after it's all done, I can see the hosted_storage is imported, > but the HostedEngine is not imported, and in the Events I see > messages like this: > > VDSM atlas-svc-18 command GetVolumeInfoVDS failed: Image path does > not exist or cannot be accessed/created: > (u'/rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/c815ec3f-6e31-4b08-81be-e515e803edce',) > >
the path here is clearly wrong, it should be > /rhev/data-center/mnt/glusterSD/atlas-fsserv-07.roma1.infn.it:_atlas-engine-02/f02d7d5d-1459-48b8-bf27-4225cdfdce23/images/b7bc6468-438c-47e7-b7a4-7ed06b786da0/c815ec3f-6e31-4b08-81be-e515e803edce, > and I see the hosted_engine.conf in the shared storage has it > correctly set as vm_disk_id=b7bc6468-438c-47e7-b7a4-7ed06b786da0. > > > Any hint on what is not allowing the HostedEngine to be imported? > I didn't find a way to add other hosted engine nodes if the HE VM > is not imported in the cluster, like we were used in the past with > the CLI using hosted-engine --deploy on multiple hosts. > > > Ciao Alessandro, > with 4.2.1 we introduced a new deployment flow for hosted-engine based > on ansible. > In this new flow we run a local VM with a running engine and we use > that engine to create a storage domain and a VM there. > At the end we shutdown the locally running engine and we move it's > disk over the disk of the VM created by the engine on the shared > storage. At this point we don't need anymore the autoimport process > since the engine migrated there already contains the engine VM and its > storage domain. > > We have an RFE, for this new flow, to add a mechanism to inject an > existing engine backup to be automatically restored before executing > engine-setup for migration/disaster-recovery scenarios. > Unfortunately it's still not ready but we have an hook mechanism to > have hosted-engine-setup executing custom ansible tasks before running > engine setup; we have an example > in?/usr/share/ovirt-hosted-engine-setup/ansible/hooks/enginevm_before_engine_setup/enginevm_before_engine_setup.yml.example > > Otherwise the old flow is still there, you have just to add > --noansible and everything should work as in the past. > > > Thanks for any help, > > > ??? 
Alessandro > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bryan.Sockel at mdaemon.com Mon Feb 26 19:30:07 2018 From: Bryan.Sockel at mdaemon.com (Bryan Sockel) Date: Mon, 26 Feb 2018 13:30:07 -0600 Subject: [ovirt-users] VM Migrations Message-ID: Hi, I am having an issue migrating all VMs based on a specific template. The template was created in a previous oVirt environment (4.1), and all VMs deployed from this template experience the same issue. I would like to find a resolution to both the template and the VMs that are already deployed from this template. The VM in question is VDI-Bryan and the migration starts around 12:25. I have attached the engine.log and the vdsm.log file from the destination server. Thanks Bryan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: engine.log URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: vdsm-target.log URL: From fabrice.soler at ac-guadeloupe.fr Mon Feb 26 21:09:02 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Mon, 26 Feb 2018 17:09:02 -0400 Subject: [ovirt-users] Start VM automatically Message-ID: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> Hello, My node (IP ovirtmgmt) is behind a router that is running on the hypervisor (the node itself). So, I need the VM (the router) to start automatically after the node starts. The oVirt engine is running on another infrastructure and the version is 4.2.0. The node is also on this version. Is there a solution? Sincerely, -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From fabrice.soler at ac-guadeloupe.fr Mon Feb 26 23:33:51 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Mon, 26 Feb 2018 19:33:51 -0400 Subject: [ovirt-users] Start VM automatically In-Reply-To: <1fad025f-0898-6a63-6366-1d9aaf02d0a0@andrewswireless.net> References: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> <1fad025f-0898-6a63-6366-1d9aaf02d0a0@andrewswireless.net> Message-ID: Hi Hanson, Thank you for your answer, but the router is a VM. So I need this VM to start without the engine. It has to start when the node starts (after a power failure). Only once the router is up will I be able to manage the node and the VMs. Do you think there is a solution? Sincerely, Fabrice On 26/02/2018 at 18:18, Hanson Turner wrote: > > Hi Fabrice, > > If there's an issue with the hypervisor, the VM should pause. In the > highly available section, (edit the advanced options on the vm) you > can set the resume options.... restart/resume/stay off > > The engine needs to be able to see + manage the node. You'll have to > take care of the networking/port forwarding/vpn/vlan etc to make sure > the engine can control the node. > > Once the node's in control, the engine can restore the VM when it > knows the node is good. > > Thanks, > > Hanson > > > On 02/26/2018 04:09 PM, Fabrice SOLER wrote: >> >> Hello, >> >> My node (IP ovirtmgmt) is behind a routeur that is running on the >> hypervisor (the node itself). >> >> So, I need that the VM (routeur) start automatically after the node >> start. >> >> The ovirt engine is running on another infrastructure and the version >> is 4.2.0. The node is also in this version. >> >> Is there a solution ?
>> >> Sincerely, >> >> -- >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From ccox at endlessnow.com Mon Feb 26 23:42:19 2018 From: ccox at endlessnow.com (Christopher Cox) Date: Mon, 26 Feb 2018 17:42:19 -0600 Subject: [ovirt-users] Start VM automatically In-Reply-To: References: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> <1fad025f-0898-6a63-6366-1d9aaf02d0a0@andrewswireless.net> Message-ID: <0e44dd3d-90b6-9f87-d305-68cf2f68d185@endlessnow.com> On 02/26/2018 05:33 PM, Fabrice SOLER wrote: > Hi Hanson, > > Thank you for your answer, but the routeur is a VM. > > So I need that this VM start without the engine. It has to start when > the node start (after a power failure). > > Only when the routeur will be up, I will be able to manage the node and > the VMs. > > Do you think there a solution ? Obviously this a bit more "chicken-and-egg" than running a hosted engine. Even if there was a "start automatically" sort of thing, this is still going to be fraught with potential issues, because almost anything could happen to prevent the "router vm" from starting. I think you've created a fairly high risk scenario. In other words, even if you got this working, I wouldn't trust it. Just my opinion. At a minimum I'd pull the "router" out of the VM stack. Now, you could have a separate hypervisor cluster stack for the router, just don't put it's mgmt engine (separate engine req'd) behind the router :-) > > Sincerely, > > Fabrice > > > Le 26/02/2018 ? 18:18, Hanson Turner a ?crit?: >> >> Hi Fabrice, >> >> If there's an issue with the hypervisor, the VM should pause. 
In the >> highly available section, (edit the advanced options on the vm) you >> can set the resume options.... restart/resume/stay off >> >> The engine needs to be able to see + manage the node. You'll have to >> take care of the networking/port forwarding/vpn/vlan etc to make sure >> the engine can control the node. >> >> Once the node's in control, the engine can restore the VM when it >> knows the node is good. >> >> Thanks, >> >> Hanson >> >> >> On 02/26/2018 04:09 PM, Fabrice SOLER wrote: >>> >>> Hello, >>> >>> My node (IP ovirtmgmt) is behind a routeur that is running on the >>> hypervisor (the node itself). >>> >>> So, I need that the VM (routeur) start automatically after the node >>> start. >>> >>> The ovirt engine is running on another infrastructure and the version >>> is 4.2.0. The node is also in this version. >>> >>> Is there a solution ? >>> >>> Sincerely, >>> >>> -- >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >> > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From geoffrsweet at gmail.com Tue Feb 27 01:04:47 2018 From: geoffrsweet at gmail.com (Geoff Sweet) Date: Mon, 26 Feb 2018 17:04:47 -0800 Subject: [ovirt-users] API endpoint for a VM to fetch metadata about itself In-Reply-To: <1e774133-bc56-544f-cf49-71620571a128@redhat.com> References: <1e774133-bc56-544f-cf49-71620571a128@redhat.com> Message-ID: OK, that's a great place for me to start. However the problem is that all my post-install tooling is now running on a VM that knows nothing about itself (having been installed via pxe and kickstart), like its {vm_id}. Can the API be used to query for a VM and its attributes based on something like a MAC address or the IP itself? -Geoff On Sun, Feb 25, 2018 at 11:05 PM, Ondra Machacek wrote: > We don't have any such resource.
We have that information in different > places of the API. For example, to find the information about devices of > the VM, like network device information (IP address, MAC, etc.), you can > query: > > /ovirt-engine/api/vms/{vm_id}/reporteddevices > > The FQDN is listed right in the basic information of the VM, querying the > VM itself: > > /ovirt-engine/api/vms/{vm_id} > > You can find all the information about specific attributes returned by > the API here in the documentation: > > http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/vm > > On 02/25/2018 03:13 AM, Geoff Sweet wrote: >> Is there an API endpoint that VMs can query to discover their oVirt >> metadata? Something similar to AWS's http://169.254.169.254/latest/ >> meta-data/ query in EC2? I'm >> trying to stitch a lot of automation workflow together and so far I have >> had great luck with oVirt. But the next small hurdle is to figure out how >> all the post-install setup stuff can figure out who the VM is so it can >> apply the appropriate configurations. >> >> Thanks! >> -Geoff >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> -------------- next part -------------- An HTML attachment was scrubbed...
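Since there is no metadata endpoint, the MAC-based lookup Geoff asks about has to be done client-side: list the VMs, fetch each VM's reported devices from the endpoint mentioned above, and match the guest's own MAC. The sketch below shows only the matching step; the dict shape is an assumed parsed form of the reporteddevices response, not the exact schema oVirt returns.

```python
# Match a guest's MAC against per-VM reported-device data already fetched
# from GET /ovirt-engine/api/vms/{vm_id}/reporteddevices.

def find_vm_by_mac(vms, mac):
    """Return the id of the VM owning `mac`, or None.

    `vms` maps vm_id -> list of reported devices, each a dict like
    {"mac": "00:1a:4a:...", "ips": ["10.0.0.5"]} (assumed shape).
    """
    wanted = mac.lower()
    for vm_id, devices in vms.items():
        for dev in devices:
            if dev.get("mac", "").lower() == wanted:
                return vm_id
    return None
```

On the guest, the MAC to search for can be read from /sys/class/net/&lt;iface&gt;/address and compared case-insensitively, as above.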
URL: From msivak at redhat.com Tue Feb 27 08:10:26 2018 From: msivak at redhat.com (Martin Sivak) Date: Tue, 27 Feb 2018 09:10:26 +0100 Subject: [ovirt-users] Start VM automatically In-Reply-To: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> References: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> Message-ID: Hi, we are considering the feature and its many angles: - starting without the management using local storage ( https://bugzilla.redhat.com/show_bug.cgi?id=1166657) - starting without the management with shared storage ( https://bugzilla.redhat.com/show_bug.cgi?id=817363) - starting the VM via the management running as Hosted Engine ( https://bugzilla.redhat.com/show_bug.cgi?id=1325468) All three cases have their pain points eg: which host should start the VM and how do you protect against split brain? If you would be so kind, please describe your use case to the relevant RFE bug so we can consider it when planning the feature. And stand assured that we are thinking about how to implement this properly. Best regards -- Martin Sivak SLA / oVirt On Mon, Feb 26, 2018 at 10:09 PM, Fabrice SOLER < fabrice.soler at ac-guadeloupe.fr> wrote: > Hello, > > My node (IP ovirtmgmt) is behind a routeur that is running on the > hypervisor (the node itself). > > So, I need that the VM (routeur) start automatically after the node start. > > The ovirt engine is running on another infrastructure and the version is > 4.2.0. The node is also in this version. > > Is there a solution ? > > Sincerely, > -- > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From sbonazzo at redhat.com Tue Feb 27 08:37:24 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Tue, 27 Feb 2018 09:37:24 +0100 Subject: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines In-Reply-To: References: Message-ID: 2018-02-26 17:49 GMT+01:00 Marco Lorenzo Crociani < marcoc at prismatelecomtesting.com>: > Hi, > I can't access avx512* instruction set from virtual machines. > Added some relevant people. > I have made a one server compute cluster to test new hardware: > > oVirt 4.1.9 > CentOS 7 > Cluster CPU Type: Intel Skylake Family > Compatibility Version: 4.1 > > HOST: > CPU Model Name:Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz > Family: Family > CPU Type: Intel Skylake Family > > Virtual Machine settings > A) VM: > Custom CPU Type: Use cluster default(Intel Skylake Family) > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: NO > > B) VM: > Custom CPU Type: Skylake-Client > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: NO > > C) VM: > Custom CPU Type: Use cluster default(Intel Skylake Family) [grey - cannot > modify] > Migration mode: Do not allow migration > Pass-Through Host CPU > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: YES ( cat /proc/cpuinfo |grep avx512: avx512f avx512dq avx512cd > avx512bw avx512vl ) > > Using pass-through host cpu (disabling vm migration) is the only way to > access avx512 in a VM, is it a bug or am I missing something? > > Regards, > > -- > Marco Crociani > Prisma Telecom Testing S.r.l. 
> via Petrocchi, 4 20127 MILANO ITALY > Phone: +39 02 26113507 > Fax: +39 02 26113597 > e-mail: marcoc at prismatelecomtesting.com > web: http://www.prismatelecomtesting.com > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pbonzini at redhat.com Tue Feb 27 08:44:16 2018 From: pbonzini at redhat.com (Paolo Bonzini) Date: Tue, 27 Feb 2018 09:44:16 +0100 Subject: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines In-Reply-To: References: Message-ID: <71aff973-8971-e274-3d8e-0b6fdbd03b2d@redhat.com> On 27/02/2018 09:37, Sandro Bonazzola wrote: > > Virtual Machine settings > A) VM: > Custom CPU Type: Use cluster default(Intel Skylake Family) > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: NO > > B) VM: > Custom CPU Type: Skylake-Client > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: NO > > C) VM: > Custom CPU Type: Use cluster default(Intel Skylake Family) [grey - > cannot modify] > Migration mode: Do not allow migration > Pass-Through Host CPU > General Tab shows: Guest CPU Type: Skylake-Client > > avx512: YES? ?( cat /proc/cpuinfo? |grep avx512: avx512f avx512dq > avx512cd avx512bw avx512vl ) > > Using pass-through host cpu (disabling vm migration) is the only way to > access avx512 in a VM, is it a bug or am I missing something? Skylake-Client does _not_ have AVX512 (I tried now on a Kaby Lake Core i7 laptop). Only Skylake-Server has it and it will be in RHEL 7.5. 
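[Editor's note] Paolo's point can be checked from inside a guest (or on the host) by parsing the flags line of /proc/cpuinfo. A small sketch, stdlib only; the sample text in the comments is illustrative:

```python
# Report which AVX-512 subsets (avx512f, avx512dq, ...) the kernel sees.
# With the Skylake-Client guest CPU model the list should come back empty;
# with host passthrough on a Skylake-Server host it should not.
import os


def avx512_flags(cpuinfo_text):
    """Return the sorted list of avx512* flags found in cpuinfo content."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(tok for tok in line.split() if tok.startswith("avx512"))
    return sorted(flags)


if __name__ == "__main__":
    if os.path.exists("/proc/cpuinfo"):
        with open("/proc/cpuinfo") as f:
            present = avx512_flags(f.read())
        print(present or "no AVX-512 support visible with this CPU model")
```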
Thanks, Paolo From arsene.gschwind at unibas.ch Tue Feb 27 09:04:23 2018 From: arsene.gschwind at unibas.ch (=?UTF-8?Q?Ars=c3=a8ne_Gschwind?=) Date: Tue, 27 Feb 2018 10:04:23 +0100 Subject: [ovirt-users] After upgrade to 4.2 some VM won't start In-Reply-To: References: Message-ID: <46e3d524-1bcc-df98-ebb4-5295c11bcc18@unibas.ch> Hi, I would like investigate what went wrong during the Cluster Compatibility update on running VMs, for sure the workaround by creating new VM and attaching disk works great but i think it would be interesting to know what went wrong. I've tried to find a way to create a dump of the VM config to be able to make a diff between old and new one to see what is different but without any luck so far... Any idea how to create such a dump? Thanks for any help. rgds, Arsene On 02/24/2018 09:03 AM, Ars?ne Gschwind wrote: > > When creating an identical VM and attaching the one disk it will start > and run perfectly. It seems that during the Cluster Compatibility > Update something doesn't work right on running VM, this only happens > on running VMs and I could reproduce it. > > Is there a way to do some kind of diff between the new and the old VM > settings to find out what may be different? > > Thanks, > Arsene > > > On 02/23/2018 08:14 PM, Ars?ne Gschwind wrote: >> >> Hi, >> >> After upgrading cluster compatibility to 4.2 some VM won't start and >> I'm unable to figured out why, it throws a java exception. >> >> I've attached the engine log. >> >> Thanks for any help/hint. >> >> rgds, >> Arsene >> >> -- >> >> *Ars?ne Gschwind* >> Fa. Sapify AG im Auftrag der Universit?t Basel >> IT Services >> Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland >> Tel. +41 79 449 25 63? | http://its.unibas.ch >> ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 >> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users > > -- > > *Ars?ne Gschwind* > Fa. 
Sapify AG im Auftrag der Universit?t Basel > IT Services > Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland > Tel. +41 79 449 25 63? | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users -- *Ars?ne Gschwind* Fa. Sapify AG im Auftrag der Universit?t Basel IT Services Klingelbergstr. 70?|? CH-4056 Basel? |? Switzerland Tel. +41 79 449 25 63? | http://its.unibas.ch ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From M.Vrgotic at activevideo.com Tue Feb 27 09:45:33 2018 From: M.Vrgotic at activevideo.com (Vrgotic, Marko) Date: Tue, 27 Feb 2018 09:45:33 +0000 Subject: [ovirt-users] How to protect SHE VM from being deleted in following setup In-Reply-To: <5FBC2B5F-E3A2-4B8A-95A2-06F6938DEBAC@ictv.com> References: <5FBC2B5F-E3A2-4B8A-95A2-06F6938DEBAC@ictv.com> Message-ID: <12506FD3-D3F4-4A8A-82D1-69271BBDF476@ictv.com> Dear Michal, Your reply pushed me into learning more about how permission sets apply and I have managed to protect the SHE. Thank you, feeling better already. Kind regards -- Marko Vrgotic From: "Vrgotic, Marko" Date: Monday, 19 February 2018 at 09:49 To: Michal Skrivanek Cc: users Subject: Re: [ovirt-users] How to protect SHE VM from being deleted in following setup Hi Michal, This is exactly what I would expect to achieve by default, if creating regular user. However, these users are allowed Admin access, and therefore, I have created ?very? limited accounts, so that they can Create,Manipulate,Delete VMs, but I do not see how and where I can set that this is allowed only for VMs they own. Here are the screenshots of the Role ?AWS VM Operator? 
I created for them: [cid:image001.png at 01D3A966.FB0232E0] [cid:image002.png at 01D3A966.FB0232E0] [cid:image003.png at 01D3A966.FB0232E0] Following one actually contains what they are allowed to: [cid:image004.png at 01D3A966.FB0232E0] What am I missing? Kindly awaiting your reply. Marko From: Michal Skrivanek Date: Sunday, 18 February 2018 at 15:23 To: "Vrgotic, Marko" Cc: users Subject: Re: [ovirt-users] How to protect SHE VM from being deleted in following setup Why do you give them permissions to HE VM? You should be able to give them creation, but not let them delete VMs they do not own -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 53502 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 41790 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 13485 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 26560 bytes Desc: image004.png URL: From nicolas at ecarnot.net Tue Feb 27 10:29:41 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Tue, 27 Feb 2018 11:29:41 +0100 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> Message-ID: <08a7c586-0b61-42aa-9784-2d2e8b4ea7d8@ecarnot.net> Le 26/02/2018 ? 15:00, Yedidyah Bar David a ?crit?: >> But how do we add custom rules in case of firewalld type? > > Please see: https://ovirt.org/blog/2017/12/host-deploy-customization/ Hello Didi and al, - I followed the advices found in this blog page, I created the exact same filename with the adequate content. 
- I've setup the cluster type to firewalld - I restarted ovirt-engine - I reinstalled a host I see no usage of this Ansible yml file. I see the creation of an ansible deploy log file for my host, and I see the usual firewall ports being opened, but I see nowhere any usage of the /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml file. - I added the debug msg part in the ansible recipe, but to no avail. - Huge grepping through the /var/log of the engine shows no calls of this script. Thus, I see no effect on ports of the host's firewalld config. What should I look at now? Thank you. -- Nicolas ECARNOT From alexeynikolaev.post at yandex.ru Tue Feb 27 11:22:37 2018 From: alexeynikolaev.post at yandex.ru (=?utf-8?B?0J3QuNC60L7Qu9Cw0LXQsiDQkNC70LXQutGB0LXQuQ==?=) Date: Tue, 27 Feb 2018 14:22:37 +0300 Subject: [ovirt-users] ovirt-ansible-modules vs ovirt 3.6 Message-ID: <896841519730557@web4j.yandex.ru> An HTML attachment was scrubbed... URL: From omachace at redhat.com Tue Feb 27 12:13:37 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 27 Feb 2018 13:13:37 +0100 Subject: [ovirt-users] API endpoint for a VM to fetch metadata about itself In-Reply-To: References: <1e774133-bc56-544f-cf49-71620571a128@redhat.com> Message-ID: Yep, you can search for a VM using many attributes, for example to search for a VM using IP address: https://fqdn/ovirt-engine/api/vms?search=ip=1.2.3.4 Here you have more search parameters you can use to find a VM: https://www.ovirt.org/documentation/admin-guide/appe-Using_Search_Bookmarks_and_Tags/#searching-for-virtual-machines On 02/27/2018 02:04 AM, Geoff Sweet wrote: > OK, that's a great place for me to start. However the problem is that > all my post-install tooling is now running on a VM that knows nothing > about itself (having been installed via pxe and kickstart) like it's > {vm_id}.? Can the API be used to query for a VM and it's attributes > based on something like a MAC address or the IP itself? 
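[Editor's note] The `search=ip=...` query above is what lets a VM that does not know its own `{vm_id}` look itself up. A sketch of building that request URL, stdlib only; the engine FQDN and addresses are placeholder assumptions, and whether properties other than `ip` (e.g. a MAC) are searchable should be checked against the search appendix linked above:

```python
# Build /ovirt-engine/api/vms?search=... so a guest can discover its own
# VM record from an address it knows about itself. ENGINE is an assumed
# placeholder; fetch the resulting URL with any HTTP client.
from urllib.parse import urlencode

ENGINE = "https://engine.example.com"  # assumed engine FQDN


def vm_search_url(criteria):
    """Build a VM search URL for criteria like {'ip': '1.2.3.4'}."""
    search = " and ".join(f"{k}={v}" for k, v in criteria.items())
    return f"{ENGINE}/ovirt-engine/api/vms?" + urlencode({"search": search})


if __name__ == "__main__":
    # e.g. "who am I" from the VM's primary address:
    print(vm_search_url({"ip": "10.0.0.15"}))
```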
> > -Geoff > > On Sun, Feb 25, 2018 at 11:05 PM, Ondra Machacek > wrote: > > We don't have any such resource. We have those information in different > ?places of the API. For example to find the information about > devices of > the VM, like network device information (IP address, MAC, etc), you can > query: > > ?/ovirt-engine/api/vms/{vm_id}/reporteddevices > > The FQDN is listed right in the basic information of the VM quering the > VM itself: > > ? /ovirt-engine/api/vms/{vm_id} > > You can find all the information about specific attributes returned by > the API here in the documentation: > > http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/vm > > > On 02/25/2018 03:13 AM, Geoff Sweet wrote: > > Is there an API endpoint that VM's can query to discover it's > oVirt metadata? Something similar to AWS's > http://169.254.169.254/latest/meta-data/ > > > query in EC2? I'm > trying to stitch a lot of automation workflow together and so > far I have had great luck with oVirt. But the next small hurdle > is to figure out how all the post-install setup stuff can figure > out who the VM is so it can the appropriate configurations. > > Thanks! > -Geoff > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > > From junaid8756 at gmail.com Tue Feb 27 12:17:29 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Tue, 27 Feb 2018 17:17:29 +0500 Subject: [ovirt-users] Error installing ovirt node Message-ID: Dear All, I have Ovirt engine 4.2 and node version is 4.2. 
After installing node in in ovirt engine when i try to install node it gives following error 14:25:37,410+05 ERROR [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host installation failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': Command returned failure code 1 during SSH session 'root at 192.168.20.20' 2018-02-27 14:25:37,416+05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, SetVdsStatusVDSCommand(HostName = node_2, SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92-4075-bba9-6cbeb890a1e5', status='InstallFailed', nonOperationalReason='NONE', stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2b138e87 2018-02-27 14:25:37,423+05 INFO [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, SetVdsStatusVDSCommand, log id: 2b138e87 2018-02-27 14:25:37,429+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command returned failure code 1 during SSH session 'root at 192.168.20.20'. 2018-02-27 14:25:37,433+05 INFO [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to object 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9-6cbeb890a1e5=VDS]', sharedLocks=''}' I have attached log file for your reference Please help me out. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: engine log.log Type: application/octet-stream Size: 481181 bytes Desc: not available URL: From didi at redhat.com Tue Feb 27 12:31:47 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Tue, 27 Feb 2018 14:31:47 +0200 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon wrote: > Dear All, > I have Ovirt engine 4.2 and node version is 4.2. > > After installing node in in ovirt engine when i try to install node it gives > following error > 14:25:37,410+05 ERROR > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host installation > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': Command > returned failure code 1 during SSH session 'root at 192.168.20.20' > 2018-02-27 14:25:37,416+05 INFO > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, > SetVdsStatusVDSCommand(HostName = node_2, > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92-4075-bba9-6cbeb890a1e5', > status='InstallFailed', nonOperationalReason='NONE', > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: 2b138e87 > 2018-02-27 14:25:37,423+05 INFO > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, > SetVdsStatusVDSCommand, log id: 2b138e87 > 2018-02-27 14:25:37,429+05 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command returned > failure code 1 during SSH session 'root at 192.168.20.20'. 
> 2018-02-27 14:25:37,433+05 INFO > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to object > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9-6cbeb890a1e5=VDS]', > sharedLocks=''}' > > > I have attached log file for your reference The relevant part is: 2018-02-27 12:54:56,909+05 ERROR [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred during installation of Host uoi_node2: Yum Cannot queue package dmidecode: Cannot retrieve metalink for repository: ovirt-4.2-epel/x86_64. Please verify its path and try again. Please check/share yum repos on the host. What happens if you run there 'yum install dmidecode'? It might be a specific bad mirror, or a bad proxy etc. You can edit the repo file to point at another specific mirror instead of using mirrorlist. Best regards, -- Didi From junaid8756 at gmail.com Tue Feb 27 12:38:51 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Tue, 27 Feb 2018 17:38:51 +0500 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: Thanks Yedidyah David for reply. Please confirm where check should the repo file either host engine or node server??? On Tue, Feb 27, 2018 at 5:31 PM, Yedidyah Bar David wrote: > On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon > wrote: > > Dear All, > > I have Ovirt engine 4.2 and node version is 4.2. 
> > > > After installing node in in ovirt engine when i try to install node it > gives > > following error > > 14:25:37,410+05 ERROR > > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host installation > > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': > Command > > returned failure code 1 during SSH session 'root at 192.168.20.20' > > 2018-02-27 14:25:37,416+05 INFO > > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, > > SetVdsStatusVDSCommand(HostName = node_2, > > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92- > 4075-bba9-6cbeb890a1e5', > > status='InstallFailed', nonOperationalReason='NONE', > > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: > 2b138e87 > > 2018-02-27 14:25:37,423+05 INFO > > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, > > SetVdsStatusVDSCommand, log id: 2b138e87 > > 2018-02-27 14:25:37,429+05 ERROR > > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: > > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command > returned > > failure code 1 during SSH session 'root at 192.168.20.20'. 
> > 2018-02-27 14:25:37,433+05 INFO > > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to > object > > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9- > 6cbeb890a1e5=VDS]', > > sharedLocks=''}' > > > > > > I have attached log file for your reference > > The relevant part is: > > 2018-02-27 12:54:56,909+05 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), > An error has occurred during installation of Host uoi_node2: Yum > Cannot queue package dmidecode: Cannot retrieve metalink for > repository: ovirt-4.2-epel/x86_64. Please verify its path and try > again. > > Please check/share yum repos on the host. What happens if you run > there 'yum install dmidecode'? > It might be a specific bad mirror, or a bad proxy etc. You can edit > the repo file to point at > another specific mirror instead of using mirrorlist. > > Best regards, > -- > Didi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Tue Feb 27 12:46:27 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Tue, 27 Feb 2018 14:46:27 +0200 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: On Tue, Feb 27, 2018 at 2:38 PM, Junaid Jadoon wrote: > Thanks Yedidyah David for reply. > > Please confirm where check should the repo file either host engine or node > server??? On the node, please. > > > > On Tue, Feb 27, 2018 at 5:31 PM, Yedidyah Bar David wrote: >> >> On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon >> wrote: >> > Dear All, >> > I have Ovirt engine 4.2 and node version is 4.2. 
>> > >> > After installing node in in ovirt engine when i try to install node it >> > gives >> > following error >> > 14:25:37,410+05 ERROR >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host installation >> > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': >> > Command >> > returned failure code 1 during SSH session 'root at 192.168.20.20' >> > 2018-02-27 14:25:37,416+05 INFO >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, >> > SetVdsStatusVDSCommand(HostName = node_2, >> > >> > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92-4075-bba9-6cbeb890a1e5', >> > status='InstallFailed', nonOperationalReason='NONE', >> > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: >> > 2b138e87 >> > 2018-02-27 14:25:37,423+05 INFO >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, >> > SetVdsStatusVDSCommand, log id: 2b138e87 >> > 2018-02-27 14:25:37,429+05 ERROR >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: >> > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command >> > returned >> > failure code 1 during SSH session 'root at 192.168.20.20'. 
>> > 2018-02-27 14:25:37,433+05 INFO >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to >> > object >> > >> > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9-6cbeb890a1e5=VDS]', >> > sharedLocks=''}' >> > >> > >> > I have attached log file for your reference >> >> The relevant part is: >> >> 2018-02-27 12:54:56,909+05 ERROR >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), >> An error has occurred during installation of Host uoi_node2: Yum >> Cannot queue package dmidecode: Cannot retrieve metalink for >> repository: ovirt-4.2-epel/x86_64. Please verify its path and try >> again. >> >> Please check/share yum repos on the host. What happens if you run >> there 'yum install dmidecode'? >> It might be a specific bad mirror, or a bad proxy etc. You can edit >> the repo file to point at >> another specific mirror instead of using mirrorlist. >> >> Best regards, >> -- >> Didi > > -- Didi From junaid8756 at gmail.com Tue Feb 27 12:49:25 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Tue, 27 Feb 2018 17:49:25 +0500 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: please guide which repo should i add on node server in order to resolve this issue???? can u please send repo list. On Tue, Feb 27, 2018 at 5:46 PM, Yedidyah Bar David wrote: > On Tue, Feb 27, 2018 at 2:38 PM, Junaid Jadoon > wrote: > > Thanks Yedidyah David for reply. > > > > Please confirm where check should the repo file either host engine or > node > > server??? > > On the node, please. > > > > > > > > > On Tue, Feb 27, 2018 at 5:31 PM, Yedidyah Bar David > wrote: > >> > >> On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon > >> wrote: > >> > Dear All, > >> > I have Ovirt engine 4.2 and node version is 4.2. 
> >> > > >> > After installing node in in ovirt engine when i try to install node it > >> > gives > >> > following error > >> > 14:25:37,410+05 ERROR > >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host > installation > >> > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': > >> > Command > >> > returned failure code 1 during SSH session 'root at 192.168.20.20' > >> > 2018-02-27 14:25:37,416+05 INFO > >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, > >> > SetVdsStatusVDSCommand(HostName = node_2, > >> > > >> > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92- > 4075-bba9-6cbeb890a1e5', > >> > status='InstallFailed', nonOperationalReason='NONE', > >> > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: > >> > 2b138e87 > >> > 2018-02-27 14:25:37,423+05 INFO > >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, > >> > SetVdsStatusVDSCommand, log id: 2b138e87 > >> > 2018-02-27 14:25:37,429+05 ERROR > >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling. > AuditLogDirector] > >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: > >> > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command > >> > returned > >> > failure code 1 during SSH session 'root at 192.168.20.20'. 
> >> > 2018-02-27 14:25:37,433+05 INFO > >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to > >> > object > >> > > >> > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9- > 6cbeb890a1e5=VDS]', > >> > sharedLocks=''}' > >> > > >> > > >> > I have attached log file for your reference > >> > >> The relevant part is: > >> > >> 2018-02-27 12:54:56,909+05 ERROR > >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > >> (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), > >> An error has occurred during installation of Host uoi_node2: Yum > >> Cannot queue package dmidecode: Cannot retrieve metalink for > >> repository: ovirt-4.2-epel/x86_64. Please verify its path and try > >> again. > >> > >> Please check/share yum repos on the host. What happens if you run > >> there 'yum install dmidecode'? > >> It might be a specific bad mirror, or a bad proxy etc. You can edit > >> the repo file to point at > >> another specific mirror instead of using mirrorlist. > >> > >> Best regards, > >> -- > >> Didi > > > > > > > > -- > Didi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From didi at redhat.com Tue Feb 27 12:55:50 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Tue, 27 Feb 2018 14:55:50 +0200 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: On Tue, Feb 27, 2018 at 2:49 PM, Junaid Jadoon wrote: > please guide which repo should i add on node server in order to resolve this > issue???? can u please send repo list. What's the output of each of these commands: rpm -qa | grep ovirt-release grep ovirt-4.2-epel /etc/yum.repos.d/* cat /etc/yum.repos.d/ovirt-*dependencies.repo yum install dmidecode Thanks, > > > > On Tue, Feb 27, 2018 at 5:46 PM, Yedidyah Bar David wrote: >> >> On Tue, Feb 27, 2018 at 2:38 PM, Junaid Jadoon >> wrote: >> > Thanks Yedidyah David for reply. 
>> > >> > Please confirm where check should the repo file either host engine or >> > node >> > server??? >> >> On the node, please. >> >> > >> > >> > >> > On Tue, Feb 27, 2018 at 5:31 PM, Yedidyah Bar David >> > wrote: >> >> >> >> On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon >> >> wrote: >> >> > Dear All, >> >> > I have Ovirt engine 4.2 and node version is 4.2. >> >> > >> >> > After installing node in in ovirt engine when i try to install node >> >> > it >> >> > gives >> >> > following error >> >> > 14:25:37,410+05 ERROR >> >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host >> >> > installation >> >> > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': >> >> > Command >> >> > returned failure code 1 during SSH session 'root at 192.168.20.20' >> >> > 2018-02-27 14:25:37,416+05 INFO >> >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, >> >> > SetVdsStatusVDSCommand(HostName = node_2, >> >> > >> >> > >> >> > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92-4075-bba9-6cbeb890a1e5', >> >> > status='InstallFailed', nonOperationalReason='NONE', >> >> > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: >> >> > 2b138e87 >> >> > 2018-02-27 14:25:37,423+05 INFO >> >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] FINISH, >> >> > SetVdsStatusVDSCommand, log id: 2b138e87 >> >> > 2018-02-27 14:25:37,429+05 ERROR >> >> > >> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: >> >> > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command >> >> > returned >> >> > failure code 1 during SSH session 'root at 192.168.20.20'. 
>> >> > 2018-02-27 14:25:37,433+05 INFO >> >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed to >> >> > object >> >> > >> >> > >> >> > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9-6cbeb890a1e5=VDS]', >> >> > sharedLocks=''}' >> >> > >> >> > >> >> > I have attached log file for your reference >> >> >> >> The relevant part is: >> >> >> >> 2018-02-27 12:54:56,909+05 ERROR >> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] >> >> (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), >> >> An error has occurred during installation of Host uoi_node2: Yum >> >> Cannot queue package dmidecode: Cannot retrieve metalink for >> >> repository: ovirt-4.2-epel/x86_64. Please verify its path and try >> >> again. >> >> >> >> Please check/share yum repos on the host. What happens if you run >> >> there 'yum install dmidecode'? >> >> It might be a specific bad mirror, or a bad proxy etc. You can edit >> >> the repo file to point at >> >> another specific mirror instead of using mirrorlist. >> >> >> >> Best regards, >> >> -- >> >> Didi >> > >> > >> >> >> >> -- >> Didi > > -- Didi From omachace at redhat.com Tue Feb 27 13:07:40 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 27 Feb 2018 14:07:40 +0100 Subject: [ovirt-users] ovirt-ansible-modules vs ovirt 3.6 In-Reply-To: <896841519730557@web4j.yandex.ru> References: <896841519730557@web4j.yandex.ru> Message-ID: <21276d37-6d1c-2174-6944-8bb80baa6bfa@redhat.com> Hi, unfortunately no, ovirt-ansible-modules can be used only with oVirt >= 4.0. On 02/27/2018 12:22 PM, ???????? ??????? wrote: > Hi community! > Is it possible to use ovirt-ansible-modules with ovirt-engine 3.6 api? > I'm trying to obtain SSO token by ovirt_auth. And get error: > "The response content type 'text/html;charset=UTF-8' isn't the expected > JSON". > However, everything works fine with ovirt-engine 4.2 api. 
> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > From omachace at redhat.com Tue Feb 27 13:15:37 2018 From: omachace at redhat.com (Ondra Machacek) Date: Tue, 27 Feb 2018 14:15:37 +0100 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: <08a7c586-0b61-42aa-9784-2d2e8b4ea7d8@ecarnot.net> References: <204e4d41-cb9d-832c-7567-a8516914d99b@ecarnot.net> <13a2e235-8610-8ec2-1c3d-5b82ed5351f3@ecarnot.net> <08a7c586-0b61-42aa-9784-2d2e8b4ea7d8@ecarnot.net> Message-ID: On 02/27/2018 11:29 AM, Nicolas Ecarnot wrote: > Le 26/02/2018 ? 15:00, Yedidyah Bar David a ?crit?: >>> But how do we add custom rules in case of firewalld type? >> >> Please see: https://ovirt.org/blog/2017/12/host-deploy-customization/ > Hello Didi and al, > > - I followed the advices found in this blog page, I created the exact > same filename with the adequate content. > - I've setup the cluster type to firewalld > - I restarted ovirt-engine > - I reinstalled a host > > I see no usage of this Ansible yml file. > I see the creation of an ansible deploy log file for my host, and I see > the usual firewall ports being opened, but I see nowhere any usage of > the /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml file. > - I added the debug msg part in the ansible recipe, but to no avail. > - Huge grepping through the /var/log of the engine shows no calls of > this script. > > Thus, I see no effect on ports of the host's firewalld config. > > What should I look at now? It looks like you hit the following bug: https://bugzilla.redhat.com/show_bug.cgi?id=1549163 It will be fixed in 4.2.2 release. I believe you can meanwhile remove line: - oVirt-metrics from file: /usr/share/ovirt-engine/playbooks/roles/ovirt-host-deploy/meta/main.yml > > Thank you. 
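[Editor's note] Once the bug Ondra mentions is fixed (or the meta/main.yml workaround applied), the hook file from the blog post is an ordinary Ansible tasks file. A minimal sketch of opening an extra firewalld port at host-deploy time; the port number is an illustrative assumption, and the exact file layout should be checked against the host-deploy customization blog post linked above:

```yaml
# /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml
# Sketch: extra firewalld rule applied when the engine deploys a host.
---
- name: Open a custom port for a local service (12345/tcp is illustrative)
  firewalld:
    port: 12345/tcp
    permanent: true
    immediate: true
    state: enabled
```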
> From sleviim at redhat.com Tue Feb 27 13:19:45 2018 From: sleviim at redhat.com (Shani Leviim) Date: Tue, 27 Feb 2018 15:19:45 +0200 Subject: [ovirt-users] Ghost Snapshot Disk In-Reply-To: <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> References: <2109773819.1728939.1519370725788.JavaMail.zimbra@cines.fr> <1126422114.1812662.1519629600204.JavaMail.zimbra@cines.fr> <1550150634.1827635.1519649308293.JavaMail.zimbra@cines.fr> <280580777.1830731.1519652234852.JavaMail.zimbra@cines.fr> <48154177.1832942.1519654691849.JavaMail.zimbra@cines.fr> Message-ID: Hi Lionel, Sorry for the delay in replying you. If it's possible from your side, syncing the data and destroying old disk sounds about right. In addition, it seems like you're having this bug: https://bugzilla.redhat.com/show_bug.cgi?id=1509629 And it was fixed for version 4.1.9. and above. *Regards,* *Shani Leviim* On Mon, Feb 26, 2018 at 4:18 PM, Lionel Caignec wrote: > Ok so i reply myself, > > Version is 4.1.7.6-1 > > I just delete manually a snapshot previously created. But this is an io > intensive vm, whit big disk (2,5To, and 5To). > > For the log, i cannot paste all my log on public list security reason, i > will send you full in private. > Here is an extract relevant to my error > engine.log-20180210:2018-02-09 23:00:03,200+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-312) [44402a8c-3196-43f0-ba33-307ea78e6f49] EVENT_ID: > USER_CREATE_SNAPSHOT(45), Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, > Job ID: 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom > ID: null, Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' was initiated by snap_user at internal. 
> engine.log-20180210:2018-02-09 23:01:06,578+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler5) [] EVENT_ID: USER_CREATE_SNAPSHOT_FINISHED_SUCCESS(68), > Correlation ID: 44402a8c-3196-43f0-ba33-307ea78e6f49, Job ID: > 030cd310-fec9-4a89-8c3f-7888504fe973, Call Stack: null, Custom ID: null, > Custom Event ID: -1, Message: Snapshot 'AUTO_7D_zz_nil_20180209_220002' > creation for VM 'zz_nil' has been completed. > engine.log-20180220:2018-02-19 17:01:23,800+01 INFO > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (default task-113) [] EVENT_ID: USER_REMOVE_SNAPSHOT(342), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Snapshot > 'AUTO_7D_zz_nil_20180209_220002' deletion for VM 'zz_nil' was initiated > by acaignec at ldap-cines-authz. > engine.log-20180221:2018-02-20 22:24:45,174+01 ERROR > [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector] > (DefaultQuartzScheduler6) [06a9efa4-1b80-4021-bf3e-41ecebe58a88] > EVENT_ID: USER_REMOVE_SNAPSHOT_FINISHED_FAILURE(357), Correlation ID: > 06a9efa4-1b80-4021-bf3e-41ecebe58a88, Job ID: c9a918a7-b00c-43cf-b6de-3659ac0765da, > Call Stack: null, Custom ID: null, Custom Event ID: -1, Message: Failed to > delete snapshot 'AUTO_7D_zz_nil_20180209_220002' for VM 'zz_nil'. > 2018-02-20 22:24:46,266+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] SPMAsyncTask::PollTask: Polling task > '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command 'DestroyImage', > Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > returned status 'finished', result 'success'. 
> 2018-02-20 22:24:46,267+01 INFO [org.ovirt.engine.core.bll.tasks.SPMAsyncTask] > (DefaultQuartzScheduler3) [516079c3] BaseAsyncTask::onTaskEndSuccess: > Task '34137342-4f30-476d-b16c-1cb7e0ea0ac0' (Parent Command > 'DestroyImage', Parameters Type 'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters') > ended successfully. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endActionIfNecessary: > All tasks of command 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6' has ended -> > executing 'endAction' > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (DefaultQuartzScheduler3) [516079c3] CommandAsyncTask::endAction: Ending > action for '1' tasks (command ID: 'fe8c91f2-386b-4b3f-bbf3-aeda8e9244c6'): > calling endAction '. > 2018-02-20 22:24:46,268+01 INFO [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] CommandAsyncTask::endCommandAction > [within thread] context: Attempting to endAction 'DestroyImage', > 2018-02-20 22:24:46,269+01 ERROR [org.ovirt.engine.core.bll.tasks.CommandAsyncTask] > (org.ovirt.thread.pool-6-thread-20) [516079c3] [within thread]: endAction > for action type DestroyImage threw an exception.: > java.lang.NullPointerException > at org.ovirt.engine.core.bll.tasks.CoCoAsyncTaskHelper. > endAction(CoCoAsyncTaskHelper.java:335) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandCoordinatorImpl. > endAction(CommandCoordinatorImpl.java:340) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask. 
> endCommandAction(CommandAsyncTask.java:154) [bll.jar:] > at org.ovirt.engine.core.bll.tasks.CommandAsyncTask.lambda$ > endActionIfNecessary$0(CommandAsyncTask.java:106) [bll.jar:] > at org.ovirt.engine.core.utils.threadpool.ThreadPoolUtil$ > InternalWrapperRunnable.run(ThreadPoolUtil.java:84) [utils.jar:] > at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > [rt.jar:1.8.0_161] > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > [rt.jar:1.8.0_161] > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > [rt.jar:1.8.0_161] > at java.lang.Thread.run(Thread.java:748) [rt.jar:1.8.0_161] > > ----- Original Message ----- > From: "Shani Leviim" > To: "Lionel Caignec" > Sent: Monday, 26 February 2018 14:42:38 > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > Yes, please. > Can you detail a bit more regarding the actions you've done? > > I'm assuming that since the snapshot had no description, trying to operate > it caused the nullPointerException you've got. > But I want to examine what was the cause for that. > > Also, can you please answer back to the list? > > > > *Regards,* > > *Shani Leviim* > > On Mon, Feb 26, 2018 at 3:37 PM, Lionel Caignec wrote: > > > Version is 4.1.7.6-1 > > > > Do you want the log from the day i deleted the snapshot? > > > > ----- Original Message ----- > > From: "Shani Leviim" > > To: "Lionel Caignec" > > Cc: "users" > > Sent: Monday, 26 February 2018 14:29:16 > > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > > > Hi, > > > > What is your engine version, please? > > I'm trying to reproduce your steps, for understanding better what is the > > cause for that error. Therefore, a full engine log is needed. > > Can you please attach it?
> > > > Thanks, > > > > > > *Shani Leviim* > > > > On Mon, Feb 26, 2018 at 2:48 PM, Lionel Caignec > wrote: > > > > > Hi > > > > > > 1) this is error message from ui.log > > > > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > > server.gwt.OvirtRemoteLoggingService] (default task-3) [] Permutation > > > name: 8C01181C3B121D0AAE1312275CC96415 > > > 2018-02-26 13:44:10,001+01 ERROR [org.ovirt.engine.ui.frontend. > > server.gwt.OvirtRemoteLoggingService] > > > (default task-3) [] Uncaught exception: com.google.gwt.core.client. > > JavaScriptException: > > > (TypeError) > > > __gwt$exception: : Cannot read property 'F' of null > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.$onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.uicommonweb.models.storage. > > > DisksAllocationModel$3.onSuccess(DisksAllocationModel.java:120) > > > at org.ovirt.engine.ui.frontend.Frontend$2.$onSuccess( > > Frontend.java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.Frontend$2.onSuccess(Frontend. > > java:233) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.$onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > OperationProcessor$2.onSuccess(OperationProcessor.java:139) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.$onSuccess( > GWTRPCCommunicationProvider. > > java:269) > > > [frontend.jar:] > > > at org.ovirt.engine.ui.frontend.communication. > > > GWTRPCCommunicationProvider$5$1.onSuccess(GWTRPCCommunicationProvider. > > java:269) > > > [frontend.jar:] > > > at com.google.gwt.user.client.rpc.impl.RequestCallbackAdapter. 
> > > onResponseReceived(RequestCallbackAdapter.java:198) [gwt-servlet.jar:] > > > at com.google.gwt.http.client.Request.$fireOnResponseReceived( > > Request.java:237) > > > [gwt-servlet.jar:] > > > at com.google.gwt.http.client.RequestBuilder$1. > > onReadyStateChange(RequestBuilder.java:409) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 65) > > > at com.google.gwt.core.client.impl.Impl.apply(Impl.java:296) > > > [gwt-servlet.jar:] > > > at com.google.gwt.core.client.impl.Impl.entry0(Impl.java:335) > > > [gwt-servlet.jar:] > > > at Unknown.eval(webadmin-0.js at 54) > > > > > > > > > 2) This line seems to be about the bad disk : > > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > > > > > 3) Snapshot table is empty for the concerned vm_id. > > > > > > ----- Mail original ----- > > > De: "Shani Leviim" > > > ?: "Lionel Caignec" > > > Cc: "users" > > > Envoy?: Lundi 26 F?vrier 2018 13:31:23 > > > Objet: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > Hi Lionel, > > > > > > The error message you've mentioned sounds like a UI error. > > > Can you please attach your ui log? > > > > > > Also, on the data from 'images' table you've uploaded, can you describe > > > which line is the relevant disk? > > > > > > Finally (for now), in case the snapshot was deleted, can you please > > > validate it by viewing the output of: > > > $ select * from snapshots; > > > > > > > > > > > > *Regards,* > > > > > > *Shani Leviim* > > > > > > On Mon, Feb 26, 2018 at 9:20 AM, Lionel Caignec > > wrote: > > > > > > > Hi Shani, > > > > thank you for helping me with your reply, > > > > i juste make a little mistake on explanation. In fact it's the > snapshot > > > > does not exist anymore. 
This is the disk(s) related to it which > still > > > > exists, and perhaps the LVM volume. So can i delete this disk manually in > > > > the database? what about the lvm > > > volume? > > > > Is it better to recreate a disk, sync the data and destroy the old one? > > > > > > > > > > > > ----- Original Message ----- > > > > From: "Shani Leviim" > > > > To: "Lionel Caignec" > > > > Cc: "users" > > > > Sent: Sunday, 25 February 2018 14:26:41 > > > > Subject: Re: [ovirt-users] Ghost Snapshot Disk > > > > > > > > Hi Lionel, > > > > > > > > You can try to delete that snapshot directly from the database. > > > > > > > > In case of using psql [1], once you've logged in to your database, you > > > can > > > > run this query: > > > > $ select * from snapshots where vm_id = ''; > > > > This one would list the snapshots associated with a VM by its id. > > > > > > > > In case you don't have your vm_id, you can locate it by querying: > > > > $ select * from vms where vm_name = 'nil'; > > > > This one would show you some details about a VM by its name (including > > > the > > > > vm's id). > > > > > > > > Once you've found the relevant snapshot, you can delete it by running: > > > > $ delete from snapshots where snapshot_id = ''; > > > > This one would delete the desired snapshot from the database. > > > > > > > > Since it's a delete operation, I would suggest confirming the ids > > before > > > > executing it. > > > > > > > > Hope you've found it useful! > > > > > > > > [1] > > > > https://www.ovirt.org/documentation/install-guide/ > > > appe-Preparing_a_Remote_ > > > > PostgreSQL_Database_for_Use_with_the_oVirt_Engine/ > > > > > > > > > > > > *Regards,* > > > > > > > > *Shani Leviim* > > > > > > > > On Fri, Feb 23, 2018 at 9:25 AM, Lionel Caignec > > > wrote: > > > > > > > > > Hi, > > > > > > > > > > i've a problem with snapshot. On one VM i've a "snapshot" ghost > > without > > > > > name or uuid, only information is size (see attachment).
In the > > > snapshot > > > > > tab there is no trace about this disk. > > > > > > > > > > In database (table images) i found this : > > > > > f242cc9a-56c1-4ae4-aef0-f75eb01f74b1 | 2018-01-17 22:02:00+01 | > > > > > 2748779069440 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.5+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | 17e26476-cecb-441d-a5f7- > > 46ab3ef387ee > > > | > > > > > 2018-01-17 22:01:29.663334+01 | 2018-01-19 08:40:14.345229+01 | f > > > | > > > > > 1 | 2 > > > > > 1c7650fa-542b-4ec2-83a1-d2c1c31be5fd | 2018-01-17 22:02:03+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-01-18 > > > > > 22:01:20.84+01 | 0dd2090c-3491-4fa1-98c3-54ae88be793c > > > > > | 2 | 4 | bf834a91-c69f-4d2c-b639- > > 116ed58296d8 > > > | > > > > > 2018-01-17 22:01:29.836133+01 | 2018-01-19 08:40:19.083508+01 | f > > > | > > > > > 1 | 2 > > > > > 8614b21f-c0de-40f2-b4fb-e5cf193b0743 | 2018-02-09 23:00:44+01 | > > > > > 5368709120000 | 00000000-0000-0000-0000-000000000000 | > > > > > 00000000-0000-0000-0000-000000000000 | 4 | 2018-02-16 > > > > > 23:00:02.855+01 | 390175dc-baf4-4831-936a-5ea68fa4c969 > > > > > > > > > > > > > > > But i does not know which line is my disk. Is it possible to > delete > > > > > directly into database? > > > > > Or is it better to dump my disk to another new and delete the > > > "corrupted > > > > > one"? > > > > > > > > > > Another thing, when i try to move the disk to another storage > > domain i > > > > > always get "uncaght exeption occured ..." and no error in > engine.log. > > > > > > > > > > > > > > > Thank you for helping. 
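Shani's query sequence quoted above (list the VM's snapshot rows, confirm the id, then delete) can be rehearsed on a throwaway database before touching the engine's PostgreSQL. A sketch — Python's stdlib sqlite3 stands in for psql purely for illustration, and 'snap-0001'/'vm-0001' are made-up ids:

```python
# Rehearsal only: sqlite3 as a stand-in for the engine's PostgreSQL, with a
# minimal imitation of the snapshots table. Ids here are invented examples.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE snapshots (snapshot_id TEXT, vm_id TEXT, description TEXT)")
db.execute("INSERT INTO snapshots VALUES ('snap-0001', 'vm-0001', 'ghost snapshot')")

# Step 1: select * from snapshots where vm_id = '...';
rows = db.execute("SELECT snapshot_id, description FROM snapshots WHERE vm_id = ?",
                  ("vm-0001",)).fetchall()
print(rows)  # [('snap-0001', 'ghost snapshot')]

# Step 2: confirm the id, then the destructive step:
# delete from snapshots where snapshot_id = '...';
db.execute("DELETE FROM snapshots WHERE snapshot_id = ?", ("snap-0001",))
print(db.execute("SELECT count(*) FROM snapshots").fetchone()[0])  # 0
```

As Shani notes, on the real database always confirm the ids before the delete; parameterized queries (the `?` placeholders above, `%s` in psycopg) avoid quoting mistakes.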
> > > > > > > > > > -- > > > > > Lionel Caignec > > > > > > > > > > _______________________________________________ > > > > > Users mailing list > > > > > Users at ovirt.org > > > > > http://lists.ovirt.org/mailman/listinfo/users > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From frolland at redhat.com Tue Feb 27 14:29:47 2018 From: frolland at redhat.com (Fred Rolland) Date: Tue, 27 Feb 2018 16:29:47 +0200 Subject: [ovirt-users] Can't move/copy VM disks between Data Centers In-Reply-To: <4023F78E-1E84-439B-B89A-718C366B2C80@starlett.lv> References: <0D950DC4-A8E3-4A39-B557-5E122AA38DE6@starlett.lv> <4023F78E-1E84-439B-B89A-718C366B2C80@starlett.lv> Message-ID: Hi, Just to make clear what you want to achieve: - DC1 - local storage - host1 - VMs - DC2 - local storage - host2 You want to move the VMs from DC1 to DC2. What you can do: - Add a shared storage domain to the DC#1 - Move VM disk from local SD to shared storage domain - Put shared storage domain to maintenance - Detach shared storage from DC1 - Attach shared storage to DC2 - Activate shared storage - You should be able to register the VM from the shared storage into the DC2 - If you want/need move disks from shared storage to local storage in DC2 Please test this flow with a dummy VM before doing on important VMs. Regards, Freddy On Mon, Feb 26, 2018 at 1:46 PM, Andrei Verovski wrote: > Hi, > > Thanks for clarification. I?m using 4.2. > Anyway, I have to define another data center with shared storage domain > (since data center with local storage domain can have only 1 host), and the > do what you have described. > > Is it possible to copy VM disks from 1 data center #1 local storage domain > to another data center #2 NFS storage domain, or need to use export storage > domain ? > > > > On 26 Feb 2018, at 13:30, Fred Rolland wrote: > > Hi, > Which version are you using? 
> > in 4.1 , the support of adding shared storage to local DC was added [1]. > You can copy/move disks to the shared storage domain, then detach the SD > and attach to another DC. > > In any case, you wont be able to live migrate VMs from the local DC, it is > not supported. > > Regards, > Fred > > [1] https://ovirt.org/develop/release-management/features/storage/ > sharedStorageDomainsAttachedToLocalDC/ > > On Fri, Feb 23, 2018 at 1:35 PM, Andrei V wrote: > >> Hi, >> >> I have oVirt setup, separate PC host engine + 2 nodes (#10 + #11) with >> local storage domains (internal RAIDs). >> 1st node #10 is currently active and can?t be turned off. >> >> Since oVirt doesn?t support more then 1 host in data center with local >> storage domain as described here: >> http://lists.ovirt.org/pipermail/users/2018-January/086118.html >> defined another data center with 1 node #11. >> >> Problem: >> 1) can?t copy or move VM disks from node #10 (even of inactive VMs) to >> node #11, this node is NOT being shown as possible destination. >> 2) can?t migrate active VMs to node #11. >> 3) Added NFS shares to data center #1 -> node #10, but can?t change data >> center #1 -> storage type to Shared, because this operation requires >> detachment of local storage domains, which is not possible, several VMs are >> active and can?t be stopped. >> >> VM disks placed on local storage domains because of performance >> limitations of our 1Gbit network. >> 2 VMs running our accounting/inventory control system, and are critical >> to NFS storage performance limits. >> >> How to solve this problem ? >> Thanks in advance. 
>> >> Andrei >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabrice.soler at ac-guadeloupe.fr Tue Feb 27 17:15:05 2018 From: fabrice.soler at ac-guadeloupe.fr (Fabrice SOLER) Date: Tue, 27 Feb 2018 13:15:05 -0400 Subject: [ovirt-users] Start VM automatically In-Reply-To: References: <167789eb-4035-a5dd-74a7-80aa41934072 at ac-guadeloupe.fr> Message-ID: Hi, We want to install oVirt for every school in Guadeloupe (~70 schools). Each school has an internet connection with a basic physical router (no VPN). We have bought one physical server for each school. We have installed an oVirt engine on our central site, from which we want to manage all the nodes in the schools. First, for tests in the laboratory, I installed a node behind a router. The router forwarded all the traffic from the oVirt engine to the ovirtmgmt IP address (NAT). It works, but I cannot get access to the VM consoles: my SPICE client tries to access the node's private IP address directly. Is there a solution to resolve this? In the oVirt documentation I read about "Spice Proxy" and "Websocket Proxy". Could these features answer this problem? In the oVirt engine, is it possible to provide a NAT address for the node? Then the VM consoles could work. So, I gave up the above architecture and moved my ovirtmgmt IP address behind a VM router which has an IPSec tunnel to our central site. The access to the VM consoles is resolved, but I discovered that the VMs do not start automatically without the engine (like VMware does). I hope you understand my problem. Sincerely, -- -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From plord at intricatenetworks.com Tue Feb 27 22:40:39 2018 From: plord at intricatenetworks.com (Zip) Date: Tue, 27 Feb 2018 16:40:39 -0600 Subject: [ovirt-users] CORSFilter Web Admin Message-ID: Is there a way to make CORSFilter work for webadmin? I have tried using the: engine-config -l | grep CORS engine-config -s CORSSupport=true engine-config -s CORSAllowedOrigins=* service ovirt-engine restart A look at: engine-config -l | grep CORS Looks like support is only for REST API? - CORSSupport: "Enables CORS (Cross Origin Resource Sharing) support in RESTAPI.? I have also tried adding to /usr/share/ovirt-engine/engine.ear/webadmin.war/WEB-INF/web.xml CORSSupport org.ovirt.engine.core.utils.servlet.CORSSupportFilter CORSSupport /* But that just ends up in Server Errors https://pastebin.com/Q1JECzSw Thanks for any help ;) Zip -------------- next part -------------- An HTML attachment was scrubbed... URL: From rightkicktech at gmail.com Tue Feb 27 23:04:28 2018 From: rightkicktech at gmail.com (Alex K) Date: Wed, 28 Feb 2018 01:04:28 +0200 Subject: [ovirt-users] Start VM automatically In-Reply-To: References: <167789eb-4035-a5dd-74a7-80aa41934072@ac-guadeloupe.fr> Message-ID: You can overwrite the console ip at cluster -> console. Try to put there the external ip and have NAT correctly set. This should work. Alex On Feb 27, 2018 12:15, "Fabrice SOLER" wrote: > Hi, > > We want to install ovirt for every school in Guadeloupe (~70 schools) > > Each school has an internet connexion with a basic physical router (no > vpn). > > We have bought one physical server for each school. > > We have installed an ovirt engine on our central site where we want to > manage all the node which are in the school. > > In first, for the tests in laboratory, I have installed a node behind a > routeur. 
The routeur forwarded all the traffic from the ovirt engine to the > ovirmgmt ip address (NAT). > > It works but I cannot have access to the VM consoles : my Spice client try > to access directly to the node private IP address. Is there a solution to > resolv this ? In ovirt documentation, I read "Spice Proxy" and "Websocket > Proxy". Could these features answer to this problematic ? > > In the ovirt engine, is it possible to provide a NAT address to the node ? > then the VM consoles coud work. > > So, I give up the above architecture and move my ovirtmgmt ip address > behind a VM routeur which has a tunneling IPSec to our central site. The > access to the VM consoles is resolved but I discover that the VM does not > start automaticaly whithout the engine (like vmware does). > > I hope you understand my problematic. > > Sincerely, > -- > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fabrice SOLER.PNG Type: image/png Size: 16525 bytes Desc: not available URL: From junaid8756 at gmail.com Wed Feb 28 04:45:01 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Wed, 28 Feb 2018 09:45:01 +0500 Subject: [ovirt-users] Error installing ovirt node In-Reply-To: References: Message-ID: Thank you very much yedidyah, I update YUM repo and issue resolve. I am thankful to you for help and direction. On Tue, Feb 27, 2018 at 5:55 PM, Yedidyah Bar David wrote: > On Tue, Feb 27, 2018 at 2:49 PM, Junaid Jadoon > wrote: > > please guide which repo should i add on node server in order to resolve > this > > issue???? can u please send repo list. 
> > What's the output of each of these commands: > > rpm -qa | grep ovirt-release > > grep ovirt-4.2-epel /etc/yum.repos.d/* > > cat /etc/yum.repos.d/ovirt-*dependencies.repo > > yum install dmidecode > > Thanks, > > > > > > > > > On Tue, Feb 27, 2018 at 5:46 PM, Yedidyah Bar David > wrote: > >> > >> On Tue, Feb 27, 2018 at 2:38 PM, Junaid Jadoon > >> wrote: > >> > Thanks Yedidyah David for reply. > >> > > >> > Please confirm where check should the repo file either host engine or > >> > node > >> > server??? > >> > >> On the node, please. > >> > >> > > >> > > >> > > >> > On Tue, Feb 27, 2018 at 5:31 PM, Yedidyah Bar David > >> > wrote: > >> >> > >> >> On Tue, Feb 27, 2018 at 2:17 PM, Junaid Jadoon > > >> >> wrote: > >> >> > Dear All, > >> >> > I have Ovirt engine 4.2 and node version is 4.2. > >> >> > > >> >> > After installing node in in ovirt engine when i try to install node > >> >> > it > >> >> > gives > >> >> > following error > >> >> > 14:25:37,410+05 ERROR > >> >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Host > >> >> > installation > >> >> > failed for host 'bd8d007a-be92-4075-bba9-6cbeb890a1e5', 'node_2': > >> >> > Command > >> >> > returned failure code 1 during SSH session 'root at 192.168.20.20' > >> >> > 2018-02-27 14:25:37,416+05 INFO > >> >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] START, > >> >> > SetVdsStatusVDSCommand(HostName = node_2, > >> >> > > >> >> > > >> >> > SetVdsStatusVDSCommandParameters:{hostId='bd8d007a-be92- > 4075-bba9-6cbeb890a1e5', > >> >> > status='InstallFailed', nonOperationalReason='NONE', > >> >> > stopSpmFailureLogged='false', maintenanceReason='null'}), log id: > >> >> > 2b138e87 > >> >> > 2018-02-27 14:25:37,423+05 INFO > >> >> > [org.ovirt.engine.core.vdsbroker.SetVdsStatusVDSCommand] > >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] 
FINISH, > >> >> > SetVdsStatusVDSCommand, log id: 2b138e87 > >> >> > 2018-02-27 14:25:37,429+05 ERROR > >> >> > > >> >> > [org.ovirt.engine.core.dal.dbbroker.auditloghandling. > AuditLogDirector] > >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] EVENT_ID: > >> >> > VDS_INSTALL_FAILED(505), Host node_2 installation failed. Command > >> >> > returned > >> >> > failure code 1 during SSH session 'root at 192.168.20.20'. > >> >> > 2018-02-27 14:25:37,433+05 INFO > >> >> > [org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] > >> >> > (EE-ManagedThreadFactory-engine-Thread-19) [52669850] Lock freed > to > >> >> > object > >> >> > > >> >> > > >> >> > 'EngineLock:{exclusiveLocks='[bd8d007a-be92-4075-bba9- > 6cbeb890a1e5=VDS]', > >> >> > sharedLocks=''}' > >> >> > > >> >> > > >> >> > I have attached log file for your reference > >> >> > >> >> The relevant part is: > >> >> > >> >> 2018-02-27 12:54:56,909+05 ERROR > >> >> [org.ovirt.engine.core.dal.dbbroker.auditloghandling. > AuditLogDirector] > >> >> (VdsDeploy) [719d5f5d] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), > >> >> An error has occurred during installation of Host uoi_node2: Yum > >> >> Cannot queue package dmidecode: Cannot retrieve metalink for > >> >> repository: ovirt-4.2-epel/x86_64. Please verify its path and try > >> >> again. > >> >> > >> >> Please check/share yum repos on the host. What happens if you run > >> >> there 'yum install dmidecode'? > >> >> It might be a specific bad mirror, or a bad proxy etc. You can edit > >> >> the repo file to point at > >> >> another specific mirror instead of using mirrorlist. > >> >> > >> >> Best regards, > >> >> -- > >> >> Didi > >> > > >> > > >> > >> > >> > >> -- > >> Didi > > > > > > > > -- > Didi > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolas at ecarnot.net Wed Feb 28 07:46:23 2018 From: nicolas at ecarnot.net (Nicolas Ecarnot) Date: Wed, 28 Feb 2018 08:46:23 +0100 Subject: [ovirt-users] Hosts firewall custom setup In-Reply-To: References: <204e4d41-cb9d-832c-7567-a8516914d99b at ecarnot.net> <13a2e235-8610-8ec2-1c3d-5b82ed5351f3 at ecarnot.net> <08a7c586-0b61-42aa-9784-2d2e8b4ea7d8 at ecarnot.net> Message-ID: <2b661be9-42a1-e035-19fc-423c6985ee9c at ecarnot.net> Hello, For the record: the workaround you suggested below is successful. Thank you. -- Nicolas Ecarnot On 27/02/2018 at 14:15, Ondra Machacek wrote: > > > On 02/27/2018 11:29 AM, Nicolas Ecarnot wrote: >> On 26/02/2018 at 15:00, Yedidyah Bar David wrote: >>>> But how do we add custom rules in case of firewalld type? >>> >>> Please see: https://ovirt.org/blog/2017/12/host-deploy-customization/ >> Hello Didi et al., >> >> - I followed the advice found in this blog page, I created the exact >> same filename with the adequate content. >> - I've set the cluster type to firewalld >> - I restarted ovirt-engine >> - I reinstalled a host >> >> I see no usage of this Ansible yml file. >> I see the creation of an ansible deploy log file for my host, and I >> see the usual firewall ports being opened, but I see nowhere any usage >> of the /etc/ovirt-engine/ansible/ovirt-host-deploy-post-tasks.yml file. >> - I added the debug msg part in the ansible recipe, but to no avail. >> - Huge grepping through the /var/log of the engine shows no calls of >> this script. >> >> Thus, I see no effect on ports of the host's firewalld config. >> >> What should I look at now? > > It looks like you hit the following bug: > > https://bugzilla.redhat.com/show_bug.cgi?id=1549163 > > It will be fixed in the 4.2.2 release. > > I believe you can meanwhile remove the line: > > - oVirt-metrics > > from the file: > > /usr/share/ovirt-engine/playbooks/roles/ovirt-host-deploy/meta/main.yml > >> >> Thank you.
>> From mperina at redhat.com Wed Feb 28 08:30:51 2018 From: mperina at redhat.com (Martin Perina) Date: Wed, 28 Feb 2018 09:30:51 +0100 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: On Wed, Feb 28, 2018 at 9:13 AM, Terry hey wrote: > Dear Martin, > Please see the following result. > [root at XXXXX ~]# fence_ilo4 -a XXX.XXX.XXX.XXX -l XXXXX -p XXXXX -v -o > status > Executing: /usr/bin/ipmitool -I lanplus -H XXX.XXX.XXX.XXX -p 623 -U XXXXX > -P XXXXX -L ADMINISTRATOR chassis power status > > Connection timed out > > > [root at XXXXX~]# > As you can see it just said connection timed out. > But i can actually access iLO5 ( same account and password) through > Internet Explorer , > ?This is completely different protocol (HTTP) using a browser, it's independent of IPMI. Are you sure that some firewall doesn't block access to the IPMI interface? Are you executing the command from different host than the host which you want access the IPMI interface of? ? ?If above is not an issue, then please login to iLO5 management using a browser and check if IPMI interface is enabled according to your iLO5 documentation ? > > I want to ask.. do you know what port did the manger use when compile this > command? > ?623 is the default IPMI port ? > > Regards > Terry > > > 2018-02-26 17:38 GMT+08:00 Martin Perina : > >> >> >> On Fri, Feb 23, 2018 at 11:34 AM, Terry hey >> wrote: >> >>> Dear Martin, >>> I am very sorry that i reply you so late. >>> Do you mean that 4.2 can support ilo5 by selecting the option "ilo4" in >>> power management? >>> >> >> ?Yes >> ? >> >> >>> "from the error message below I'd say that you are either not using >>> correct IP address of iLO5 interface or you haven't enabled remote access >>> to your iLO5 interface" >>> I just try it and double confirm that i did not type a wrong IP. But the >>> error message is same. >>> >> >> ?Unfortunately I don't have iLO5 server available, so I cannot provide >> more details. 
Anyway could you please double check your server >> documentation, that you have enabled access to iLO5 IPMI interface >> correctly? And could you please share output of following command? >> >> ? >> f >> ?? >> ence_ilo4 -a -l -p -v -o status >> >> Thanks >> >> Martin >> ? >> >> >>> >>> Regards >>> Terry >>> >>> 2018-02-08 16:13 GMT+08:00 Martin Perina : >>> >>>> Hi Terry, >>>> >>>> from the error message below I'd say that you are either not using >>>> correct IP address of iLO5 interface or you haven't enabled remote access >>>> to your iLO5 interface. >>>> According to [1] iLO5 should fully IPMI compatible. So are you sure >>>> that you enabled the remote access to your iLO5 address in iLO5 management? >>>> Please consult [1] how to enable everything and use a user with at >>>> least Operator privileges. >>>> >>>> Regards >>>> >>>> Martin >>>> >>>> [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018 >>>> 324en_us >>>> >>>> >>>> On Thu, Feb 8, 2018 at 7:57 AM, Terry hey >>>> wrote: >>>> >>>>> Dear Martin, >>>>> >>>>> Thank you for helping me. To answer your question, >>>>> 1. Does the Test in Edit fence agent dialog work?? >>>>> Ans: it shows that "Test failed: Internal JSON-RPC error" >>>>> >>>>> Regardless the fail result, i press "OK" to enable power management. >>>>> There are four event log appear in "Events" >>>>> ********************************The follwing are the log in >>>>> "Event""******************************** >>>>> Host host01 configuration was updated by admin at internal-authz. >>>>> Kdump integration is enabled for host hostv01, but kdump is not >>>>> configured properly on host. >>>>> Health check on Host host01 indicates that future attempts to Stop >>>>> this host using Power-Management are expected to fail. >>>>> Health check on Host host01 indicates that future attempts to Start >>>>> this host using Power-Management are expected to fail. >>>>> >>>>> 2. 
If not could you please try to install fence-agents-all package on >>>>> different host and execute? >>>>> Ans: It just shows "Connection timed out". >>>>> >>>>> So, does it means that it is not support iLo5 now or i configure >>>>> wrongly? >>>>> >>>>> Regards, >>>>> Terry >>>>> >>>>> 2018-02-02 15:46 GMT+08:00 Martin Perina : >>>>> >>>>>> >>>>>> >>>>>> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey >>>>>> wrote: >>>>>> >>>>>>> Dear Martin, >>>>>>> >>>>>>> Um..Since i am going to use HPE ProLiant DL360 Gen10 Server to setup >>>>>>> oVirt Node(Hypervisor). HP G10 is using ilo5 rather than ilo4. Therefore, i >>>>>>> would like to ask whether oVirt power management support iLO5 or not. >>>>>>> >>>>>> >>>>>> ?We don't have any hardware with iLO5 available, but there is a good >>>>>> chance that it will be compatible with iLO4. Have you tried to setup your >>>>>> server with iLO4? Does the Test in Edit fence agent dialog work?? If not >>>>>> could you please try to install fence-agents-all package on different host >>>>>> and execute following: >>>>>> >>>>>> ?? >>>>>> f >>>>>> ?? >>>>>> ence_ilo4 -a -l -p -v -o status >>>>>> >>>>>> and share the output? >>>>>> >>>>>> Thanks >>>>>> >>>>>> Martin >>>>>> >>>>>> >>>>>>> If not, do you have any idea to setup power management with HP G10? >>>>>>> >>>>>>> Regards, >>>>>>> Terry >>>>>>> >>>>>>> 2018-02-01 16:21 GMT+08:00 Martin Perina : >>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>>>>>>> lorenzetto.luca at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. >>>>>>>>> Try using the standard ipmi. >>>>>>>>> >>>>>>>> >>>>>>>> ?It's not just an alias, ilo3/ilo4 also have different defaults >>>>>>>> than ipmilan. For example if you use ilo4, then by default following is >>>>>>>> used: >>>>>>>> >>>>>>>> ? 
>>>>>>>> >>>>>>>> ?lanplus=1 >>>>>>>> power_wait=4 >>>>>>>> >>>>>>>> ?So I recommend to start with ilo4 and add any necessary custom >>>>>>>> options into Options field. If you need some custom >>>>>>>> options, could you please share them with us? It would be very >>>>>>>> helpful for us, if needed we could introduce ilo5 with >>>>>>>> different defaults then ilo4 >>>>>>>> >>>>>>>> Thanks >>>>>>>> >>>>>>>> Martin >>>>>>>> >>>>>>>> >>>>>>>>> Luca >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> Il 31 gen 2018 11:14 PM, "Terry hey" ha >>>>>>>>> scritto: >>>>>>>>> >>>>>>>>>> Dear all, >>>>>>>>>> Did oVirt 4.2 Power management support iLO5 as i could not see >>>>>>>>>> iLO5 option in Power Management. >>>>>>>>>> >>>>>>>>>> Regards >>>>>>>>>> Terry >>>>>>>>>> >>>>>>>>>> _______________________________________________ >>>>>>>>>> Users mailing list >>>>>>>>>> Users at ovirt.org >>>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>>> >>>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Martin Perina >>>>>>>> Associate Manager, Software Engineering >>>>>>>> Red Hat Czech s.r.o. >>>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Martin Perina >>>>>> Associate Manager, Software Engineering >>>>>> Red Hat Czech s.r.o. >>>>>> >>>>> >>>>> >>>> >>>> >>>> -- >>>> Martin Perina >>>> Associate Manager, Software Engineering >>>> Red Hat Czech s.r.o. >>>> >>> >>> >> >> >> -- >> Martin Perina >> Associate Manager, Software Engineering >> Red Hat Czech s.r.o. >> > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... 
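[Editorial note] The verbose fence_ilo4 run earlier in this thread shows the exact ipmitool call the agent makes. The sketch below rebuilds that same probe so IPMI problems can be separated from fence-agent problems; the address and credentials are placeholders, and UDP port 623 must be reachable from the host running the agent.

```shell
# Minimal sketch (not an official fence-agents tool): rebuild the ipmitool
# invocation that "fence_ilo4 -v" printed in the thread above, so the IPMI
# side of an iLO4/iLO5 board can be probed by hand. The IP, user and
# password used below are placeholders, not real credentials.
build_ipmi_status_cmd() {
    # $1 = iLO address, $2 = IPMI user, $3 = IPMI password
    printf '%s' "/usr/bin/ipmitool -I lanplus -H $1 -p 623 -U $2 -P $3 -L ADMINISTRATOR chassis power status"
}

# Against real hardware you would run (commented out here):
# eval "$(build_ipmi_status_cmd 192.0.2.10 fenceuser fencepass)"
```

If this direct probe also times out, the problem is firewall/IPMI configuration on the iLO side, not the fence agent.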
URL: From mperina at redhat.com Wed Feb 28 08:37:56 2018 From: mperina at redhat.com (Martin Perina) Date: Wed, 28 Feb 2018 09:37:56 +0100 Subject: [ovirt-users] Power Management - Supermicro SuperBlade In-Reply-To: <4B032C77-E7F6-4F2A-888D-5A2836CFE84E@sourcemirrors.org> References: <4B032C77-E7F6-4F2A-888D-5A2836CFE84E@sourcemirrors.org> Message-ID: On Tue, Feb 27, 2018 at 10:13 PM, Scott Harvanek wrote: > Well I can get all that the issue is how to I specify the blade ID to the > fence agent? Since we don?t want to power cycle the entire shelf > ?I haven't seen this hardware, but generally there are 2 possibilities: 1. Withing your SuperBlade management you need to specify unique IP address for IPMI interface of each host 2. If 1. is not possible, but you have other identification of a host, then you can try to pass that value using '-n' option on command line or 'plug=XXX' in Options field of a Fence Agent in webadmin Martin > -Scott H > > On Feb 26, 2018, at 3:34 AM, Martin Perina wrote: > > > > On Sun, Feb 25, 2018 at 7:53 AM, Scott Harvanek > wrote: > >> Hoping someone can help here, I've looked and can't find any examples on >> this. >> >> I've got some SuperBlade chassis and the blades are managed via the >> chassis controller. What is the proper way to configure power management >> then via the controller? You can control individual blades via the >> SMCIPMItool but I'm not entirely sure how to configure that inside of Ovirt >> for power management, does anyone have any experience on this or can point >> me to some good docs? >> > > ?According to [1] those servers should support IPMI, so you could try > ipmilan fence agent and most probably try to add lanplus=1 into Options > field of an agent. If it doesn't work as expected, could you please try to > execute below commands and share the output? > > fence_ipmilan -a -l -p -P > -vvv -o status > > Thanks > > Martin > > > [1] https://www.supermicro.com/products/SuperBlade/management/? > > > >> Cheers! 
>> >> Scott H. >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > > -- Martin Perina Associate Manager, Software Engineering Red Hat Czech s.r.o. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcoc at prismatelecomtesting.com Tue Feb 27 09:53:06 2018 From: marcoc at prismatelecomtesting.com (Marco Lorenzo Crociani) Date: Tue, 27 Feb 2018 10:53:06 +0100 Subject: [ovirt-users] ovirt 4.1 - skylake - no avx512 support in virtual machines In-Reply-To: <71aff973-8971-e274-3d8e-0b6fdbd03b2d@redhat.com> References: <71aff973-8971-e274-3d8e-0b6fdbd03b2d@redhat.com> Message-ID: > Skylake-Client does _not_ have AVX512 (I tried now on a Kaby Lake Core > i7 laptop). Only Skylake-Server has it and it will be in RHEL 7.5. > > Thanks, > > Paolo > Ok, we'll stay with pass-through until RHEL 7.5. Thanks, -- Marco Crociani Prisma Telecom Testing S.r.l. via Petrocchi, 4 20127 MILANO ITALY Phone: +39 02 26113507 Fax: +39 02 26113597 e-mail: marcoc at prismatelecomtesting.com web: http://www.prismatelecomtesting.com From tbaror at gmail.com Tue Feb 27 13:21:42 2018 From: tbaror at gmail.com (Tal Bar-Or) Date: Tue, 27 Feb 2018 15:21:42 +0200 Subject: [ovirt-users] Cannot activate host from maintenance mode Message-ID: Hello, I have Ovirt Version:4.2.1.7-1.el7.centos, I did upgrade according to host indication ,and since then I get the following error when trying to activate host " Cannot activate Host. Host has no unique id. " Any idea how to fix this issue, please advice Thanks -- Tal Bar-or -------------- next part -------------- An HTML attachment was scrubbed... 
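[Editorial note] For the "Cannot activate Host. Host has no unique id." error above: VDSM is commonly reported to take the host id from /etc/vdsm/vdsm.id, falling back to the SMBIOS UUID (`dmidecode -s system-uuid`). When both are empty or duplicated, the usual workaround is to write a fresh UUID and restart vdsmd. The sketch below is hedged — the extra path argument exists only so the helper can be exercised outside /etc, and the vdsm.id mechanism should be verified against the VDSM documentation for your version.

```shell
# Hedged sketch: make sure /etc/vdsm/vdsm.id exists and is non-empty.
# Assumption: VDSM reads its host id from this file before falling back
# to the SMBIOS UUID.
ensure_vdsm_id() {
    id_file="${1:-/etc/vdsm/vdsm.id}"
    if [ ! -s "$id_file" ]; then
        # /proc/sys/kernel/random/uuid yields a random UUID on Linux
        cat /proc/sys/kernel/random/uuid > "$id_file"
    fi
    cat "$id_file"
}

# On the affected host (as root), roughly:
#   ensure_vdsm_id /etc/vdsm/vdsm.id && systemctl restart vdsmd
```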
URL: From tbaror at gmail.com Tue Feb 27 13:32:33 2018 From: tbaror at gmail.com (Tal Bar-Or) Date: Tue, 27 Feb 2018 15:32:33 +0200 Subject: [ovirt-users] Before migrating to Ovirt In-Reply-To: References: Message-ID: Thanks for the clear answer On Mon, Feb 26, 2018 at 5:13 PM, Yedidyah Bar David wrote: > On Fri, Feb 23, 2018 at 6:33 PM, Tal Bar-Or wrote: > > Hello Ovirt users, > > > > Currently we haveing 4 Xen pools in our organization each pool have 8 > > servers. > > Due to new 7.3 version change ,we plan to migrate our upcoming 5th pool > to > > Ovirt, the decision to do POC migration to Ovirt is from lots of Xen > users > > that suggested Ovirt migration better and mature product migrate to it. > > > > I started to test Ovirt currently only with one server with engine > > installed, my question regarding Ovirt sine i am really newbie with that > > system , is , is it possible to have multiple engine on it ? > > What happen if the engine server crash with some reason? can i load it on > > another cluster server? > > Please advice > > The standard way to have HA for oVirt engine is to set it up as a > self-hosted-engine and have more than one host in the hosted-engine > cluster. > > We do know that there are people doing HA using other means, based on > external > HA software (heartbeat), but that's more expensive (in hardware, work, > and perhaps software, depends on what you use) - I'd suggest doing that > only if your organization already has in-house expertise and experience > with such software. > > Good luck and best regards, > -- > Didi > -- Tal Bar-or -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From recreationh at gmail.com Wed Feb 28 07:19:34 2018 From: recreationh at gmail.com (Terry hey) Date: Wed, 28 Feb 2018 15:19:34 +0800 Subject: [ovirt-users] VM paused rather than migrate to another hosts Message-ID: Dear all, I am testing iSCSI bonding failover test on oVirt, but i observed that VM were paused and did not migrate to another host. Please see the details as follows. I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 cannot support iLO 5, thus i cannot setup power management. For the cluster setting, I set "Migrate Virtual Machines" under the Migration Policy. For each hypervisor, I bonded two iSCSI interface as bond 1. I created one Virtual machine and enable high availability on it. Also, I created one Virtual machine and did not enable high availability on it. When i shutdown one of the iSCSI interface, nothing happened. But when i shutdown both iSCSI interface, VM in that hosts were paused and did not migrate to another hosts. Is this behavior normal or i miss something? Thank you all of you. Regards, Terry Hung -------------- next part -------------- An HTML attachment was scrubbed... URL: From recreationh at gmail.com Wed Feb 28 08:13:57 2018 From: recreationh at gmail.com (Terry hey) Date: Wed, 28 Feb 2018 16:13:57 +0800 Subject: [ovirt-users] Power management - oVirt 4,2 In-Reply-To: References: Message-ID: Dear Martin, Please see the following result. [root at XXXXX ~]# fence_ilo4 -a XXX.XXX.XXX.XXX -l XXXXX -p XXXXX -v -o status Executing: /usr/bin/ipmitool -I lanplus -H XXX.XXX.XXX.XXX -p 623 -U XXXXX -P XXXXX -L ADMINISTRATOR chassis power status Connection timed out [root at XXXXX~]# As you can see it just said connection timed out. But i can actually access iLO5 ( same account and password) through Internet Explorer , I want to ask.. do you know what port did the manger use when compile this command? 
Regards Terry 2018-02-26 17:38 GMT+08:00 Martin Perina : > > > On Fri, Feb 23, 2018 at 11:34 AM, Terry hey wrote: > >> Dear Martin, >> I am very sorry that i reply you so late. >> Do you mean that 4.2 can support ilo5 by selecting the option "ilo4" in >> power management? >> > > ?Yes > ? > > >> "from the error message below I'd say that you are either not using >> correct IP address of iLO5 interface or you haven't enabled remote access >> to your iLO5 interface" >> I just try it and double confirm that i did not type a wrong IP. But the >> error message is same. >> > > ?Unfortunately I don't have iLO5 server available, so I cannot provide > more details. Anyway could you please double check your server > documentation, that you have enabled access to iLO5 IPMI interface > correctly? And could you please share output of following command? > > ? > f > ?? > ence_ilo4 -a -l -p -v -o status > > Thanks > > Martin > ? > > >> >> Regards >> Terry >> >> 2018-02-08 16:13 GMT+08:00 Martin Perina : >> >>> Hi Terry, >>> >>> from the error message below I'd say that you are either not using >>> correct IP address of iLO5 interface or you haven't enabled remote access >>> to your iLO5 interface. >>> According to [1] iLO5 should fully IPMI compatible. So are you sure that >>> you enabled the remote access to your iLO5 address in iLO5 management? >>> Please consult [1] how to enable everything and use a user with at least >>> Operator privileges. >>> >>> Regards >>> >>> Martin >>> >>> [1] https://support.hpe.com/hpsc/doc/public/display?docId=a00018324en_us >>> >>> >>> On Thu, Feb 8, 2018 at 7:57 AM, Terry hey wrote: >>> >>>> Dear Martin, >>>> >>>> Thank you for helping me. To answer your question, >>>> 1. Does the Test in Edit fence agent dialog work?? >>>> Ans: it shows that "Test failed: Internal JSON-RPC error" >>>> >>>> Regardless the fail result, i press "OK" to enable power management. 
>>>> There are four event log appear in "Events" >>>> ********************************The follwing are the log in >>>> "Event""******************************** >>>> Host host01 configuration was updated by admin at internal-authz. >>>> Kdump integration is enabled for host hostv01, but kdump is not >>>> configured properly on host. >>>> Health check on Host host01 indicates that future attempts to Stop this >>>> host using Power-Management are expected to fail. >>>> Health check on Host host01 indicates that future attempts to Start >>>> this host using Power-Management are expected to fail. >>>> >>>> 2. If not could you please try to install fence-agents-all package on >>>> different host and execute? >>>> Ans: It just shows "Connection timed out". >>>> >>>> So, does it means that it is not support iLo5 now or i configure >>>> wrongly? >>>> >>>> Regards, >>>> Terry >>>> >>>> 2018-02-02 15:46 GMT+08:00 Martin Perina : >>>> >>>>> >>>>> >>>>> On Fri, Feb 2, 2018 at 5:40 AM, Terry hey >>>>> wrote: >>>>> >>>>>> Dear Martin, >>>>>> >>>>>> Um..Since i am going to use HPE ProLiant DL360 Gen10 Server to setup >>>>>> oVirt Node(Hypervisor). HP G10 is using ilo5 rather than ilo4. Therefore, i >>>>>> would like to ask whether oVirt power management support iLO5 or not. >>>>>> >>>>> >>>>> ?We don't have any hardware with iLO5 available, but there is a good >>>>> chance that it will be compatible with iLO4. Have you tried to setup your >>>>> server with iLO4? Does the Test in Edit fence agent dialog work?? If not >>>>> could you please try to install fence-agents-all package on different host >>>>> and execute following: >>>>> >>>>> ?? >>>>> f >>>>> ?? >>>>> ence_ilo4 -a -l -p -v -o status >>>>> >>>>> and share the output? >>>>> >>>>> Thanks >>>>> >>>>> Martin >>>>> >>>>> >>>>>> If not, do you have any idea to setup power management with HP G10? 
>>>>>> >>>>>> Regards, >>>>>> Terry >>>>>> >>>>>> 2018-02-01 16:21 GMT+08:00 Martin Perina : >>>>>> >>>>>>> >>>>>>> >>>>>>> On Wed, Jan 31, 2018 at 11:19 PM, Luca 'remix_tj' Lorenzetto < >>>>>>> lorenzetto.luca at gmail.com> wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> From ilo3 and up, ilo fencing agents are an alias for fence_ipmi. >>>>>>>> Try using the standard ipmi. >>>>>>>> >>>>>>> >>>>>>> ?It's not just an alias, ilo3/ilo4 also have different defaults than >>>>>>> ipmilan. For example if you use ilo4, then by default following is used: >>>>>>> >>>>>>> ? >>>>>>> >>>>>>> ?lanplus=1 >>>>>>> power_wait=4 >>>>>>> >>>>>>> ?So I recommend to start with ilo4 and add any necessary custom >>>>>>> options into Options field. If you need some custom >>>>>>> options, could you please share them with us? It would be very >>>>>>> helpful for us, if needed we could introduce ilo5 with >>>>>>> different defaults then ilo4 >>>>>>> >>>>>>> Thanks >>>>>>> >>>>>>> Martin >>>>>>> >>>>>>> >>>>>>>> Luca >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> Il 31 gen 2018 11:14 PM, "Terry hey" ha >>>>>>>> scritto: >>>>>>>> >>>>>>>>> Dear all, >>>>>>>>> Did oVirt 4.2 Power management support iLO5 as i could not see >>>>>>>>> iLO5 option in Power Management. >>>>>>>>> >>>>>>>>> Regards >>>>>>>>> Terry >>>>>>>>> >>>>>>>>> _______________________________________________ >>>>>>>>> Users mailing list >>>>>>>>> Users at ovirt.org >>>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>>> >>>>>>>>> >>>>>>>> _______________________________________________ >>>>>>>> Users mailing list >>>>>>>> Users at ovirt.org >>>>>>>> http://lists.ovirt.org/mailman/listinfo/users >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Martin Perina >>>>>>> Associate Manager, Software Engineering >>>>>>> Red Hat Czech s.r.o. >>>>>>> >>>>>> >>>>>> >>>>> >>>>> >>>>> -- >>>>> Martin Perina >>>>> Associate Manager, Software Engineering >>>>> Red Hat Czech s.r.o. 
>>>>> >>>> >>>> >>> >>> >>> -- >>> Martin Perina >>> Associate Manager, Software Engineering >>> Red Hat Czech s.r.o. >>> >> >> > > > -- > Martin Perina > Associate Manager, Software Engineering > Red Hat Czech s.r.o. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jm3185951 at gmail.com Wed Feb 28 10:21:44 2018 From: jm3185951 at gmail.com (Jonathan Mathews) Date: Wed, 28 Feb 2018 12:21:44 +0200 Subject: [ovirt-users] Failure to upgrade Cluster Compatibility Version Message-ID: I have been upgrading my oVirt platform from 3.4 and I am trying to get to 4.2. I have managed to get the platform to 3.6, but need to upgrade the Cluster Compatibility Version. When I select 3.6 in the Cluster Compatibility Version and select OK, it highlights Compatibility Version in red, (image attached). There are no errors been displayed on screen, or in the /var/log/ovirt-engine/engine.log file. Please let me know if I am missing something and how I can resolve this? Thanks Jonathan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: oVirt issue.png Type: image/png Size: 83385 bytes Desc: not available URL: From phoracek at redhat.com Wed Feb 28 11:30:37 2018 From: phoracek at redhat.com (Petr Horacek) Date: Wed, 28 Feb 2018 12:30:37 +0100 Subject: [ovirt-users] Network and VLANs In-Reply-To: References: Message-ID: Hello, I don't see how is the bug related to your problem, could you elaborate please? It seems to be fixed now. Is there any other problem with Setup Networks other than the bug? oVirt does not support Virtual Switch neither Network Stack, there is an experimental support of Open vSwitch and OVN though. 
Regards, Petr 2018-02-22 10:52 GMT+01:00 Gabriel Stein : > s/ wrote this Bug about my Problem/There is a Bug with my Problem"/g > > > > Gabriel Stein > ------------------------------ > Gabriel Ferraz Stein > Tel.: +49 (0) 170 2881531 > > 2018-02-22 10:50 GMT+01:00 Gabriel Stein : > >> Hi all, >> >> I have some problems adding VLANs to my VMs and I don't known if there >> is a better way to do that, like a 'oVirt Way'. >> >> All I need is to have a VM on "Test Network" that communicates with >> another hardware/VMs on "Test Network". All VLANs are configured on my >> Switch, the Hosts from oVirt are connected and tagged to this VLANs. >> >> Is there a "oVirt Way" to do that other than "Setup Networks"? Can I use >> with oVirt an Virtual Switch? Or a Network Stack? >> >> I wrote this Bug about my Problem... >> >> Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1540463 >> >> Thanks in Advance! >> >> Best Regards, >> >> Gabriel >> >> >> >> Gabriel Stein >> ------------------------------ >> Gabriel Ferraz Stein >> Tel.: +49 (0) 170 2881531 >> > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From phoracek at redhat.com Wed Feb 28 11:58:24 2018 From: phoracek at redhat.com (Petr Horacek) Date: Wed, 28 Feb 2018 12:58:24 +0100 Subject: [ovirt-users] Change management network In-Reply-To: <8e2a2323-8d72-c908-e23d-e5a49e1e0c41@bootc.boo.tc> References: <8e2a2323-8d72-c908-e23d-e5a49e1e0c41@bootc.boo.tc> Message-ID: Hello, 2018-02-23 14:31 GMT+01:00 Chris Boot : > On 22/02/18 17:15, Chris Boot wrote: > > Hi all, > > > > I have an oVirt cluster on which I need to change which VLAN is the > > management network. > > > > The new management network is an existing VM network. I've configured IP > > addresses for all the hosts on this network, and I've even moved the > > HostedEngine VM onto this network. 
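[Editorial note] When a VM on a tagged network cannot reach peers, it helps to first confirm that the VLAN device oVirt created on the host really carries the expected 802.1Q tag, before digging into the Setup Networks bug reports above. A hedged sketch (interface names and the VLAN id are placeholders):

```shell
# Parse the 802.1Q tag out of iproute2 output. Expects the output of
# `ip -d link show dev <iface>` on stdin and prints the numeric VLAN id.
vlan_id_from_link() {
    sed -n 's/.*vlan protocol 802\.1Q id \([0-9][0-9]*\).*/\1/p' | head -n1
}

# On a host where "Test Network" is VLAN 100 on bond0, roughly:
#   ip -d link show dev bond0.100 | vlan_id_from_link    # should print 100
```

If the tag is present on the host but traffic still fails, the switch port trunk configuration is the next thing to check.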
So far so good. > > > > What I cannot seem to be able to do is actually change the "management > > network" toggle in the cluster to this network: the oVirt Engine > > complains saying: > > > > "Error while executing action: Cannot edit Network. Changing management > > network in a non-empty cluster is not allowed." > > > > How can I get around this? I clearly cannot empty the cluster, as the > > cluster contains all my existing VMs, hosts and HostedEngine. > > It seems I have to create a new cluster, migrate a host over, migrate a > few VMs, and so on until everything is moved over. This really isn't > ideal as the VMs have to be shut down and reconfigured, but doable. > There is no better way, management network cannot be changed once there are Hosts in Cluster. > > What I seem to be stuck on is changing the cluster on the HostedEngine. > I actually have it running on a host in the new cluster, but it still > appears in the old cluster on the web interface with no way to change this. > Martin, is such thing possible in HostedEngine? > > Any hints, please? > > This is on oVirt 4.1.9. Upgrading to 4.2.1 is not out of the question if > it's likely to help. > > Thanks, > Chris > > -- > Chris Boot > bootc at boo.tc > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > Petr -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabrielstein at gmail.com Wed Feb 28 12:03:25 2018 From: gabrielstein at gmail.com (Gabriel Stein) Date: Wed, 28 Feb 2018 13:03:25 +0100 Subject: [ovirt-users] Network and VLANs In-Reply-To: References: Message-ID: Hi! Well, the problem starts here: https://bugzilla.redhat.com/show_bug.cgi?id=1528906 This is a solved marked, but it isn't. 
This is what still happens on some servers from us: https://bugzilla.redhat.com/show_bug.cgi?id=1540463 I have four servers, I updated all hosts on the newest state from ovirt(4.2.1) and just for two servers worked. For other two doesn't works. The packages seems to be the same versions on all servers, for the "working" and "not working". The question about oVirt Networking was my curiosity, if I can solve this problem using another way/workaround(or even a officiall way from ovirt). If you need more info, I give all the info that you need or I can even create a bugzilla ticket for that. Thanks! All the best, Gabriel Gabriel Stein ------------------------------ Gabriel Ferraz Stein Tel.: +49 (0) 170 2881531 2018-02-28 12:30 GMT+01:00 Petr Horacek : > Hello, > > I don't see how is the bug related to your problem, could you elaborate > please? It seems to be fixed now. > > Is there any other problem with Setup Networks other than the bug? > > oVirt does not support Virtual Switch neither Network Stack, there is an > experimental support of Open vSwitch and OVN though. > > Regards, > Petr > > > > 2018-02-22 10:52 GMT+01:00 Gabriel Stein : > >> s/ wrote this Bug about my Problem/There is a Bug with my Problem"/g >> >> >> >> Gabriel Stein >> ------------------------------ >> Gabriel Ferraz Stein >> Tel.: +49 (0) 170 2881531 >> >> 2018-02-22 10:50 GMT+01:00 Gabriel Stein : >> >>> Hi all, >>> >>> I have some problems adding VLANs to my VMs and I don't known if there >>> is a better way to do that, like a 'oVirt Way'. >>> >>> All I need is to have a VM on "Test Network" that communicates with >>> another hardware/VMs on "Test Network". All VLANs are configured on my >>> Switch, the Hosts from oVirt are connected and tagged to this VLANs. >>> >>> Is there a "oVirt Way" to do that other than "Setup Networks"? Can I use >>> with oVirt an Virtual Switch? Or a Network Stack? >>> >>> I wrote this Bug about my Problem... 
>>> >>> Bug: https://bugzilla.redhat.com/show_bug.cgi?id=1540463 >>> >>> Thanks in Advance! >>> >>> Best Regards, >>> >>> Gabriel >>> >>> >>> >>> Gabriel Stein >>> ------------------------------ >>> Gabriel Ferraz Stein >>> Tel.: +49 (0) 170 2881531 >>> >> >> >> _______________________________________________ >> Users mailing list >> Users at ovirt.org >> http://lists.ovirt.org/mailman/listinfo/users >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From suporte at logicworks.pt Wed Feb 28 12:10:59 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Wed, 28 Feb 2018 12:10:59 +0000 (WET) Subject: [ovirt-users] Backup & Restore Message-ID: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> Hi, I'm testing backup & restore on Ovirt 4.2. I follow this doc https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ Try to restore to a fresh installation but always get this error message: restore-permissions Preparing to restore: - Unpacking file 'back_file' Restoring: - Files Provisioning PostgreSQL users/databases: - user 'engine', database 'engine' Restoring: FATAL: Can't connect to database 'ovirt_engine_history'. Please see '/usr/bin/engine-backup --help'. On the live engine I run # engine-backup --scope=all --mode=backup --file=file_name --log=log_file_name And try to restore on a fresh installation: # engine-backup --mode=restore --file=file_name --log=log_file_name --provision-db --restore-permissions Any Idea? Thanks -- Jose Ferradeira http://www.logicworks.pt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From didi at redhat.com Wed Feb 28 12:24:50 2018 From: didi at redhat.com (Yedidyah Bar David) Date: Wed, 28 Feb 2018 14:24:50 +0200 Subject: [ovirt-users] Backup & Restore In-Reply-To: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> References: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> Message-ID: On Wed, Feb 28, 2018 at 2:10 PM, wrote: > Hi, > > I'm testing backup & restore on Ovirt 4.2. > I follow this doc > https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ > Try to restore to a fresh installation but always get this error message: > > restore-permissions > Preparing to restore: > - Unpacking file 'back_file' > Restoring: > - Files > Provisioning PostgreSQL users/databases: > - user 'engine', database 'engine' > Restoring: > FATAL: Can't connect to database 'ovirt_engine_history'. Please see > '/usr/bin/engine-backup --help'. > > On the live engine I run # engine-backup --scope=all --mode=backup > --file=file_name --log=log_file_name > > And try to restore on a fresh installation: > # engine-backup --mode=restore --file=file_name --log=log_file_name > --provision-db --restore-permissions > > Any Idea? Please try adding to restore command '--providion-dwh-db'. Thanks. 
-- Didi From suporte at logicworks.pt Wed Feb 28 12:45:04 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Wed, 28 Feb 2018 12:45:04 +0000 (WET) Subject: [ovirt-users] Backup & Restore In-Reply-To: References: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> Message-ID: <1513265050.20337674.1519821904171.JavaMail.zimbra@logicworks.pt> Still no luck: # engine-backup --mode=restore --file=back_futur --log=log_futur --provision-db --restore-permissions --provision-dwh-db Preparing to restore: - Unpacking file 'back_futur' Restoring: - Files Provisioning PostgreSQL users/databases: - user 'engine', database 'engine' - user 'ovirt_engine_history', database 'ovirt_engine_history' Restoring: - Engine database 'engine' FATAL: Errors while restoring database engine I did a engine-cleanup, try it again but still the same error. De: "Yedidyah Bar David" Para: suporte at logicworks.pt Cc: "ovirt users" Enviadas: Quarta-feira, 28 De Fevereiro de 2018 12:24:50 Assunto: Re: [ovirt-users] Backup & Restore On Wed, Feb 28, 2018 at 2:10 PM, wrote: > Hi, > > I'm testing backup & restore on Ovirt 4.2. > I follow this doc > https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ > Try to restore to a fresh installation but always get this error message: > > restore-permissions > Preparing to restore: > - Unpacking file 'back_file' > Restoring: > - Files > Provisioning PostgreSQL users/databases: > - user 'engine', database 'engine' > Restoring: > FATAL: Can't connect to database 'ovirt_engine_history'. Please see > '/usr/bin/engine-backup --help'. > > On the live engine I run # engine-backup --scope=all --mode=backup > --file=file_name --log=log_file_name > > And try to restore on a fresh installation: > # engine-backup --mode=restore --file=file_name --log=log_file_name > --provision-db --restore-permissions > > Any Idea? Please try adding to restore command '--providion-dwh-db'. Thanks. 
-- Didi -------------- next part -------------- An HTML attachment was scrubbed... URL: From anastasiya.ruzhanskaya at frtk.ru Wed Feb 28 12:13:47 2018 From: anastasiya.ruzhanskaya at frtk.ru (Anastasiya Ruzhanskaya) Date: Wed, 28 Feb 2018 13:13:47 +0100 Subject: [ovirt-users] Installing on ubuntu Message-ID: Hello! I am new to oVirt. I want to install it on Ubuntu ( oVirt-Engine) , as I do this for testing for my diploma at university and don't have too much space on my computer for hundreds of VMs. What is the status now? I looked on what is written on the site, is it still true and it can be done through all the hacks listed there? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahadas at redhat.com Wed Feb 28 14:12:02 2018 From: ahadas at redhat.com (Arik Hadas) Date: Wed, 28 Feb 2018 16:12:02 +0200 Subject: [ovirt-users] After upgrade to 4.2 some VM won't start In-Reply-To: <46e3d524-1bcc-df98-ebb4-5295c11bcc18@unibas.ch> References: <46e3d524-1bcc-df98-ebb4-5295c11bcc18@unibas.ch> Message-ID: On Tue, Feb 27, 2018 at 11:04 AM, Ars?ne Gschwind wrote: > Hi, > > I would like investigate what went wrong during the Cluster Compatibility > update on running VMs, for sure the workaround by creating new VM and > attaching disk works great but i think it would be interesting to know what > went wrong. > > I've tried to find a way to create a dump of the VM config to be able to > make a diff between old and new one to see what is different but without > any luck so far... > > Any idea how to create such a dump? > Can you please provide the output of the following query in your database: select type, device, address, alias, is_managed, is_plugged from vm_device where vm_id in (select vm_guid from vm_static where vm_name=''); where is the name of one of the VMs you can't start because of that NPE? > Thanks for any help. 
> > rgds, > > Arsene > > On 02/24/2018 09:03 AM, Ars?ne Gschwind wrote: > > When creating an identical VM and attaching the one disk it will start and > run perfectly. It seems that during the Cluster Compatibility Update > something doesn't work right on running VM, this only happens on running > VMs and I could reproduce it. > > Is there a way to do some kind of diff between the new and the old VM > settings to find out what may be different? > > Thanks, > Arsene > > On 02/23/2018 08:14 PM, Ars?ne Gschwind wrote: > > Hi, > > After upgrading cluster compatibility to 4.2 some VM won't start and I'm > unable to figured out why, it throws a java exception. > > I've attached the engine log. > > Thanks for any help/hint. > > rgds, > Arsene > -- > > *Ars?ne Gschwind* > Fa. Sapify AG im Auftrag der Universit?t Basel > IT Services > Klingelbergstr. 70 | CH-4056 Basel | Switzerland > Tel. +41 79 449 25 63 <+41%2079%20449%2025%2063> | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > <+41%2061%20267%2014%2011> > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -- > > *Ars?ne Gschwind* > Fa. Sapify AG im Auftrag der Universit?t Basel > IT Services > Klingelbergstr. 70 | CH-4056 Basel | Switzerland > Tel. +41 79 449 25 63 <+41%2079%20449%2025%2063> | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > <+41%2061%20267%2014%2011> > > > _______________________________________________ > Users mailing listUsers at ovirt.orghttp://lists.ovirt.org/mailman/listinfo/users > > > -- > > *Ars?ne Gschwind* > Fa. Sapify AG im Auftrag der Universit?t Basel > IT Services > Klingelbergstr. 70 | CH-4056 Basel | Switzerland > Tel. 
+41 79 449 25 63 <+41%2079%20449%2025%2063> | http://its.unibas.ch > ITS-ServiceDesk: support-its at unibas.ch | +41 61 267 14 11 > <+41%2061%20267%2014%2011> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From mzamazal at redhat.com Wed Feb 28 14:29:39 2018 From: mzamazal at redhat.com (Milan Zamazal) Date: Wed, 28 Feb 2018 15:29:39 +0100 Subject: [ovirt-users] VM paused rather than migrated to another host In-Reply-To: (Terry hey's message of "Wed, 28 Feb 2018 15:19:34 +0800") References: Message-ID: <87r2p5w5kc.fsf@redhat.com> Terry hey writes: > I am testing iSCSI bonding failover on oVirt, but I observed that VMs > were paused and did not migrate to another host. Please see the details as > follows. > > I have two hypervisors. Since they are running iLO 5 and oVirt 4.2 does not > support iLO 5, I cannot set up power management. > > For the cluster setting, I set "Migrate Virtual Machines" under the > Migration Policy. > > On each hypervisor, I bonded the two iSCSI interfaces as bond 1. > > I created one virtual machine and enabled high availability on it. > I also created one virtual machine and did not enable high availability on > it. > > When I shut down one of the iSCSI interfaces, nothing happened. > But when I shut down both iSCSI interfaces, the VMs on that host were paused and > did not migrate to another host. Is this behavior normal, or did I miss > something? A paused VM can't be migrated, since there are no guarantees about the storage state. As the VMs were paused in an erroneous situation (rather than a controlled one, such as putting the host into maintenance), the migration policy can't help here. But highly available VMs can be restarted on another host automatically. Do you have a VM lease enabled for the highly available VM in the High Availability settings?
With a lease, Engine should be able to restart the VM elsewhere after a while; without it, Engine can't do that, since there is a danger of resuming the VM on the original host, resulting in multiple instances of the same VM running at the same time. VMs without high availability must be restarted manually (unless the storage domain becomes available again). HTH, Milan From ykaul at redhat.com Wed Feb 28 14:44:03 2018 From: ykaul at redhat.com (Yaniv Kaul) Date: Wed, 28 Feb 2018 16:44:03 +0200 Subject: [ovirt-users] API endpoint for a VM to fetch metadata about itself In-Reply-To: References: <1e774133-bc56-544f-cf49-71620571a128@redhat.com> Message-ID: On Tue, Feb 27, 2018 at 3:04 AM, Geoff Sweet wrote: > OK, that's a great place for me to start. However, the problem is that all > my post-install tooling is now running on a VM that knows nothing about > itself (having been installed via PXE and kickstart), like its {vm_id}. > Can the API be used to query for a VM and its attributes based on > something like a MAC address or the IP itself? > If you want its ID, you can get it via dmidecode: dmidecode | grep UUID Y. > > -Geoff > > On Sun, Feb 25, 2018 at 11:05 PM, Ondra Machacek > wrote: > >> We don't have any such resource. We have that information in different >> places of the API. For example, to find the information about devices of >> the VM, like network device information (IP address, MAC, etc.), you can >> query: >> >> /ovirt-engine/api/vms/{vm_id}/reporteddevices >> >> The FQDN is listed right in the basic information of the VM, querying the >> VM itself: >> >> /ovirt-engine/api/vms/{vm_id} >> >> You can find all the information about specific attributes returned by >> the API in the documentation: >> >> http://ovirt.github.io/ovirt-engine-api-model/4.2/#types/vm >> >> On 02/25/2018 03:13 AM, Geoff Sweet wrote: >> >>> Is there an API endpoint that VMs can query to discover their oVirt >>> metadata?
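Building on Yaniv's dmidecode tip: the UUID that dmidecode reports inside the guest is the VM's id, so a guest can assemble its own API URL from it. A rough sketch; the sample dmidecode output and the engine host name are invented placeholders:

```python
import re

# Invented sample of dmidecode output; on a real guest you would run
# dmidecode (as root) and capture its stdout instead.
sample = """\
System Information
\tManufacturer: oVirt
\tUUID: 2b8d6d02-5b0e-4f30-9a80-93e6c4d3f8a1
"""

def guest_uuid(dmidecode_output):
    """Extract the first UUID-shaped token after a 'UUID:' label."""
    m = re.search(r"UUID:\s*([0-9a-fA-F-]{36})", dmidecode_output)
    return m.group(1).lower() if m else None

vm_id = guest_uuid(sample)
# "engine.example.com" is a placeholder for your engine's FQDN.
url = "https://engine.example.com/ovirt-engine/api/vms/%s" % vm_id
print(url)
```

The resulting URL is the /vms/{vm_id} resource Ondra mentions; appending /reporteddevices gives the device/IP details.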
Something similar to AWS's http://169.254.169.254/latest/ >>> meta-data/ query in EC2? I'm >>> trying to stitch a lot of automation workflow together and so far I have >>> had great luck with oVirt. But the next small hurdle is to figure out how >>> all the post-install setup stuff can figure out who the VM is so it can apply the >>> appropriate configurations. >>> >>> Thanks! >>> -Geoff >>> >>> >>> _______________________________________________ >>> Users mailing list >>> Users at ovirt.org >>> http://lists.ovirt.org/mailman/listinfo/users >>> >>> > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From suporte at logicworks.pt Wed Feb 28 14:44:39 2018 From: suporte at logicworks.pt (suporte at logicworks.pt) Date: Wed, 28 Feb 2018 14:44:39 +0000 (WET) Subject: [ovirt-users] Backup & Restore In-Reply-To: <1513265050.20337674.1519821904171.JavaMail.zimbra@logicworks.pt> References: <1044341911.20329570.1519819859027.JavaMail.zimbra@logicworks.pt> <1513265050.20337674.1519821904171.JavaMail.zimbra@logicworks.pt> Message-ID: <1229669542.20349654.1519829079176.JavaMail.zimbra@logicworks.pt> If I run # engine-backup --mode=restore --file=back_futur --log=log_futur --provision-db --restore-permissions --provision-dwh-db --log=/root/rest-log to create a log, I found these errors: 2018-02-28 14:36:31 6339: pg_cmd running: psql -w -U ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -t -c show lc_messages 2018-02-28 14:36:31 6339: pg_cmd running: pg_dump -w -U ovirt_engine_history -h localhost -p 5432 ovirt_engine_history -s 2018-02-28 14:36:31 6339: OUTPUT: - Engine database 'engine' 2018-02-28 14:36:31 6339: Restoring engine database backup at /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db 2018-02-28 14:36:31 6339: restoreDB: backupfile /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db user
engine host localhost port 5432 database engine orig_user compressor format custom jobsnum 2 2018-02-28 14:36:31 6339: pg_cmd running: pg_restore -w -U engine -h localhost -p 5432 -d engine -j 2 /tmp/engine-backup.VVkcNuYAkV/db/engine_backup.db pg_restore: [archiver (db)] Error while PROCESSING TOC: pg_restore: [archiver (db)] Error from TOC entry 7314; 0 0 COMMENT EXTENSION plpgsql pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql Command was: COMMENT ON EXTENSION plpgsql IS 'PL/pgSQL procedural language'; pg_restore: [archiver (db)] Error from TOC entry 693; 1255 211334 FUNCTION uuid_generate_v1() engine pg_restore: [archiver (db)] could not execute query: ERROR: function "uuid_generate_v1" already exists with same argument types Command was: CREATE FUNCTION uuid_generate_v1() RETURNS uuid LANGUAGE plpgsql STABLE AS ' DECLARE v_val BIGINT; v_4_1_par... pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of function uuid_generate_v1 Command was: ALTER FUNCTION public.uuid_generate_v1() OWNER TO engine; pg_restore: WARNING: column "user_role_title" has type "unknown" DETAIL: Proceeding with relation creation anyway. 
pg_restore: WARNING: no privileges could be revoked for "public" pg_restore: WARNING: no privileges could be revoked for "public" pg_restore: WARNING: no privileges were granted for "public" pg_restore: WARNING: no privileges were granted for "public" WARNING: errors ignored on restore: 3 2018-02-28 14:37:23 6339: FATAL: Errors while restoring database engine From: suporte at logicworks.pt To: "Yedidyah Bar David" Cc: "ovirt users" Sent: Wednesday, 28 February 2018 12:45:04 Subject: Re: [ovirt-users] Backup & Restore Still no luck: # engine-backup --mode=restore --file=back_futur --log=log_futur --provision-db --restore-permissions --provision-dwh-db Preparing to restore: - Unpacking file 'back_futur' Restoring: - Files Provisioning PostgreSQL users/databases: - user 'engine', database 'engine' - user 'ovirt_engine_history', database 'ovirt_engine_history' Restoring: - Engine database 'engine' FATAL: Errors while restoring database engine I did an engine-cleanup and tried again, but still the same error. From: "Yedidyah Bar David" To: suporte at logicworks.pt Cc: "ovirt users" Sent: Wednesday, 28 February 2018 12:24:50 Subject: Re: [ovirt-users] Backup & Restore On Wed, Feb 28, 2018 at 2:10 PM, wrote: > Hi, > > I'm testing backup & restore on oVirt 4.2. > I followed this doc: > https://www.ovirt.org/documentation/admin-guide/chap-Backups_and_Migration/ > I try to restore to a fresh installation but always get this error message: > > restore-permissions > Preparing to restore: > - Unpacking file 'back_file' > Restoring: > - Files > Provisioning PostgreSQL users/databases: > - user 'engine', database 'engine' > Restoring: > FATAL: Can't connect to database 'ovirt_engine_history'. Please see > '/usr/bin/engine-backup --help'.
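Restore logs like the one quoted above mix harmless warnings with the errors that actually abort the restore. A small sketch of how one might separate them when reading such a log (the sample lines are condensed from the log in this thread):

```python
# A condensed excerpt of the pg_restore/engine-backup log quoted above.
log = """\
pg_restore: [archiver (db)] could not execute query: ERROR: must be owner of extension plpgsql
pg_restore: WARNING: no privileges could be revoked for "public"
WARNING: errors ignored on restore: 3
2018-02-28 14:37:23 6339: FATAL: Errors while restoring database engine
"""

def classify(log_text):
    """Split log lines into hard errors (ERROR/FATAL) and warnings."""
    errors, warnings = [], []
    for line in log_text.splitlines():
        if "ERROR" in line or "FATAL" in line:
            errors.append(line)
        elif "WARNING" in line:
            warnings.append(line)
    return errors, warnings

errors, warnings = classify(log)
print(len(errors), "errors /", len(warnings), "warnings")
```

This only helps with reading the log; which errors engine-backup treats as fatal is up to the tool itself.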
> > On the live engine I run # engine-backup --scope=all --mode=backup > --file=file_name --log=log_file_name > > And try to restore on a fresh installation: > # engine-backup --mode=restore --file=file_name --log=log_file_name > --provision-db --restore-permissions > > Any idea? Please try adding '--provision-dwh-db' to the restore command. Thanks. -- Didi _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From sbonazzo at redhat.com Wed Feb 28 14:48:05 2018 From: sbonazzo at redhat.com (Sandro Bonazzola) Date: Wed, 28 Feb 2018 15:48:05 +0100 Subject: [ovirt-users] Installing on ubuntu In-Reply-To: References: Message-ID: 2018-02-28 13:13 GMT+01:00 Anastasiya Ruzhanskaya < anastasiya.ruzhanskaya at frtk.ru>: > Hello! > I am new to oVirt. I want to install it on Ubuntu (oVirt-Engine), as I > do this for testing for my diploma at university and don't have too much > space on my computer for hundreds of VMs. > What is the status now? I looked at what is written on the site; is it > still true, and can it be done through all the hacks listed there? > Hi and welcome to the oVirt community! I haven't tested the Ubuntu installation procedure recently, but I've seen people asking about it on the #ovirt IRC channel. If you try to install oVirt on Ubuntu, please help update the website with your experience. > Thank you. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > -- SANDRO BONAZZOLA ASSOCIATE MANAGER, SOFTWARE ENGINEERING, EMEA ENG VIRTUALIZATION R&D Red Hat EMEA TRIED. TESTED. TRUSTED. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From simone.bruckner at fabasoft.com Wed Feb 28 14:51:34 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Wed, 28 Feb 2018 14:51:34 +0000 Subject: [ovirt-users] Cannot activate storage domain Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE500D1@fabamailserver.fabagl.fabasoft.com> Hi all, we run a small oVirt installation that we also use for automated testing (automatically creating and dropping VMs). We have an inactive FC storage domain that we cannot activate any more. We see several events at that time, starting with: VM perftest-c17 is down with error. Exit message: Unable to get volume size for domain b83c159c-4ad6-4613-ba16-bab95ccd10c0 volume 686376c1-4be1-44c3-89a3-0a8addc8fdf2. Trying to activate the storage domain results in the following alert event for each host: VDSM command GetVGInfoVDS failed: Volume Group does not exist: (u'vg_uuid: 813oRe-64r8-mloU-k9G2-LFsS-dXSG-hpN4kf',) And after those messages from all hosts we get: VDSM command ActivateStorageDomainVDS failed: Storage domain does not exist: (u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',) Failed to activate Storage Domain VMHOST_LUN_205 (Data Center Production) by Invalid status on Data Center Production. Setting status to Non Responsive. Storage Pool Manager runs on Host vmhost003.fabagl.fabasoft.com (Address: vmhost003.fabagl.fabasoft.com), Data Center Production. Checking the hosts with multipath -ll, we see the LUN without errors. We run oVirt 4.2.1 on CentOS 7.4. Hosts are CentOS 7.4 hosts with oVirt installed using oVirt engine. Hosts are connected to about 30 FC LUNs (8 TB each) on two all-flash storage arrays. Thank you, Simone Bruckner From angel.gonzalez at uam.es Wed Feb 28 15:09:04 2018 From: angel.gonzalez at uam.es (Angel R.
Gonzalez) Date: Wed, 28 Feb 2018 16:09:04 +0100 Subject: [ovirt-users] Permission on Vm and User portal In-Reply-To: References: <47c975e531094928a075be4125a5ab33@DR1-XEXCH01-B.eset.corp> Message-ID: <692850c5-66f7-b174-32d8-b21ce7128a39@uam.es> Hi all, I have oVirt 4.1.2 with LDAP authentication (OpenLDAP). In this LDAP tree I have two groups, students and teachers. How can I add permissions to run VMs or pools for these groups without adding permissions to each member of the groups? Thank you in advance. Angel. From mperina at redhat.com Wed Feb 28 15:57:09 2018 From: mperina at redhat.com (Martin Perina) Date: Wed, 28 Feb 2018 16:57:09 +0100 Subject: [ovirt-users] Cannot activate host from maintenance mode In-Reply-To: References: Message-ID: On 28 Feb 2018 10:14 am, "Tal Bar-Or" wrote: Hello, I have oVirt version 4.2.1.7-1.el7.centos. I did an upgrade according to the host indication, and since then I get the following error when trying to activate the host: "Cannot activate Host. Host has no unique id." Could you please share all engine logs with us so we can investigate? Thanks Martin Any idea how to fix this issue? Please advise. Thanks -- Tal Bar-or _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users From simone.bruckner at fabasoft.com Wed Feb 28 16:30:18 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Wed, 28 Feb 2018 16:30:18 +0000 Subject: [ovirt-users] Cannot activate host from maintenance mode In-Reply-To: References: Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE51639@fabamailserver.fabagl.fabasoft.com> Hi Martin, please find the logs attached. The storage domain became inactive at around 10:42am CET. One other thing to mention is that all VMs that were running on the inactive storage domain are still available.
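Regarding Tal's "Host has no unique id" error above: VDSM keeps the host's unique id in a small file on the host (commonly /etc/vdsm/vdsm.id; treat the path as an assumption and check it on your version). A sketch of a sanity check for the file's contents:

```python
import uuid

def check_host_id(text):
    """Return the normalized host id if `text` is a single valid UUID,
    else None (an empty or malformed file can leave the host without
    a usable unique id)."""
    value = text.strip()
    try:
        return str(uuid.UUID(value))
    except ValueError:
        return None

# Example with a made-up id; on a host you would read the file instead,
# e.g. text = open("/etc/vdsm/vdsm.id").read()
print(check_host_id("4C4C4544-0042-3510-8054-B2C04F443358"))
print(check_host_id(""))
```

If the file really is missing or malformed, fixing it is version-specific, so the engine logs Martin asked for are still the right next step.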
All the best, Simone From: users-bounces at ovirt.org [mailto:users-bounces at ovirt.org] On behalf of Martin Perina Sent: Wednesday, 28 February 2018 16:57 To: Tal Bar-Or Cc: users Subject: Re: [ovirt-users] Cannot activate host from maintenance mode On 28 Feb 2018 10:14 am, "Tal Bar-Or" > wrote: Hello, I have oVirt version 4.2.1.7-1.el7.centos. I did an upgrade according to the host indication, and since then I get the following error when trying to activate the host: "Cannot activate Host. Host has no unique id." Could you please share all engine logs with us so we can investigate? Thanks Martin Any idea how to fix this issue? Please advise. Thanks -- Tal Bar-or _______________________________________________ Users mailing list Users at ovirt.org http://lists.ovirt.org/mailman/listinfo/users -------------- next part -------------- A non-text attachment was scrubbed... Name: engine.log.tar.gz Type: application/x-gzip Size: 3015990 bytes Desc: engine.log.tar.gz URL: From simone.bruckner at fabasoft.com Wed Feb 28 17:16:36 2018 From: simone.bruckner at fabasoft.com (Bruckner, Simone) Date: Wed, 28 Feb 2018 17:16:36 +0000 Subject: [ovirt-users] Recall: Cannot activate host from maintenance mode Message-ID: <2CB4E8C8E00E594EA06D4AC427E429920FE52871@fabamailserver.fabagl.fabasoft.com> Bruckner, Simone would like to recall the message "[ovirt-users] Cannot activate host from maintenance mode". From junaid8756 at gmail.com Wed Feb 28 18:14:22 2018 From: junaid8756 at gmail.com (Junaid Jadoon) Date: Wed, 28 Feb 2018 23:14:22 +0500 Subject: [ovirt-users] open source backup solution for ovirt/VMs Message-ID: Hi, Can you please suggest an open source backup solution for oVirt virtual machines? My backup media is an FC tape library which is directly attached to my oVirt node. I really appreciate your help. Thanks.
From matonb at ltresources.co.uk Wed Feb 28 19:39:52 2018 From: matonb at ltresources.co.uk (Maton, Brett) Date: Wed, 28 Feb 2018 19:39:52 +0000 Subject: [ovirt-users] open source backup solution for ovirt/VMs In-Reply-To: References: Message-ID: You could look at https://github.com/openbacchus/bacchus to automate oVirt VM backups. You'd still need to do 'something' to move the VM images from the export domain to your tape device, though. On 28 February 2018 at 18:14, Junaid Jadoon wrote: > Hi, > Can you please suggest an open source backup solution for oVirt virtual > machines? > > My backup media is an FC tape library which is directly attached to my oVirt > node. > > I really appreciate your help. > > Thanks. > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users > > From biholcomb at l1049h.com Wed Feb 28 22:10:59 2018 From: biholcomb at l1049h.com (Brett Holcomb) Date: Wed, 28 Feb 2018 17:10:59 -0500 Subject: [ovirt-users] open source backup solution for ovirt/VMs In-Reply-To: References: Message-ID: <341128da-7338-67fd-2975-c8b50892253e@l1049h.com> I use Bareos for backup. It is open source. On 02/28/2018 01:14 PM, Junaid Jadoon wrote: > Hi, > Can you please suggest an open source backup solution for oVirt > virtual machines? > > My backup media is an FC tape library which is directly attached to my > oVirt node. > > I really appreciate your help. > > Thanks. > > > _______________________________________________ > Users mailing list > Users at ovirt.org > http://lists.ovirt.org/mailman/listinfo/users