[Users] VM won't restart after an NFS snapshot restore.
by Usman Aslam
I have some VMs that live on an NFS share. I had to revert the VM's
disk to a backup from a few days ago, so I powered the VM down and copied over
the following files:
039a8482-c267-4051-b1e6-1c1dee49b3d7
039a8482-c267-4051-b1e6-1c1dee49b3d7.lease
039a8482-c267-4051-b1e6-1c1dee49b3d7.meta
8d48505d-846d-49a7-8b50-d972ee051145
8d48505d-846d-49a7-8b50-d972ee051145.lease
8d48505d-846d-49a7-8b50-d972ee051145.meta
and now when I try to power on the VM, it complains:
2013-Oct-02, 22:02:38
Failed to run VM zabbix-prod-01 (User: admin@internal).
2013-Oct-02, 22:02:38
Failed to run VM zabbix-prod-01 on Host
tss-tusk-ovirt-01-ovirtmgmt.tusk.tufts.edu.
2013-Oct-02, 22:02:38
VM zabbix-prod-01 is down. Exit message: 'truesize'.
Any ideas on how I could resolve this? Or perhaps a better way of approaching
the restore at the filesystem level?
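For the filesystem-level approach, the main things are that the copied files keep
the ownership and permissions vdsm expects (vdsm:kvm, i.e. 36:36) and that each
volume's data file, .lease and .meta are restored together so they stay consistent.
A rough sketch of how such a copy could look, with placeholder paths for the backup
and the live image directory (these paths are assumptions, adjust them to your layout):

# Hypothetical paths: substitute your backup location and NFS-domain image directory
BACKUP=/backup/nfs-domain/images/<image-uuid>
LIVE=/rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<image-uuid>

# With the VM powered off, copy data + .lease + .meta together, preserving owner/mode
rsync -a "$BACKUP"/ "$LIVE"/

# If the backup was taken as root without preserving ownership, restore it afterwards
chown 36:36 "$LIVE"/*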
I see the following in vdsm.log:
Thread-7843::ERROR::2013-10-02
22:02:37,548::vm::716::vm.Vm::(_startUnderlyingVm)
vmId=`8e8764ad-6b4c-48d8-9a19-fa5cf77208ef`::The vm start process failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 678, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/libvirtvm.py", line 1467, in _run
devices = self.buildConfDevices()
File "/usr/share/vdsm/vm.py", line 515, in buildConfDevices
self._normalizeVdsmImg(drv)
File "/usr/share/vdsm/vm.py", line 408, in _normalizeVdsmImg
drv['truesize'] = res['truesize']
KeyError: 'truesize'
Thread-7843::DEBUG::2013-10-02
22:02:37,553::vm::1065::vm.Vm::(setDownStatus)
vmId=`8e8764ad-6b4c-48d8-9a19-fa5cf77208ef`::Changed state to Down:
'truesize'
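The traceback shows that vdsm's volume size lookup returned a result without a
'truesize' key, i.e. vdsm could not read the volume it was asked about. After a
manual copy the usual suspects are ownership or permissions lost during the copy,
or a restored .meta that no longer matches the volume. This is a guess at the cause
rather than a confirmed fix, but checking what the vdsm user can actually see is a
cheap first step (the image directory below is a placeholder):

# Hypothetical path: substitute your NFS-domain mount and image UUID
IMGDIR=/rhev/data-center/mnt/<server:_export>/<sd-uuid>/images/<image-uuid>

ls -ln "$IMGDIR"        # volumes should be owned by uid/gid 36:36 (vdsm:kvm)
cat "$IMGDIR"/*.meta    # do the UUIDs and size in the restored .meta still match?
sudo -u vdsm qemu-img info "$IMGDIR"/039a8482-c267-4051-b1e6-1c1dee49b3d7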
Any help would be really nice, thanks!
--
Usman
[Users] Metadata broken
by Eduardo Ramos
Hi all!
I'm getting this message in the SPM's vdsmd.log:
MetaDataSealIsBroken: Meta Data seal is broken (checksum mismatch):
'cksum = 7a7df999bca1c1ac3878433a55d4ce2e2fda671a, computed_cksum =
464c1a075cffe3881375d24242ad490e70a3d8f1'
Could anyone help me solve it?
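In case it helps the investigation: the seal looks like a SHA-1 digest stored next
to the domain metadata, so a first step can be comparing the stored value with a
freshly computed one. The exact serialization vdsm hashes (which lines are included,
and whether newlines count) is an assumption here, so treat this as an inspection
aid rather than a repair procedure; editing domain metadata by hand on a live SPM
is risky.

# Assumption: the domain metadata is a key=value file whose checksum line stores
# a SHA-1 over the remaining lines; point META at your domain's dom_md metadata
META=/path/to/dom_md/metadata

grep -i cksum "$META"                           # digest currently stored
grep -vi cksum "$META" | tr -d '\n' | sha1sum   # one plausible recomputation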
Thanks.
Re: [Users] oVirt Solaris support
by René Koch (ovido)
On Mon, 2013-09-16 at 16:29 +0100, Dan Kenigsberg wrote:
> On Mon, Sep 16, 2013 at 04:39:47PM +0200, René Koch (ovido) wrote:
> >
> > On Mon, 2013-09-16 at 15:14 +0100, Dan Kenigsberg wrote:
> >
> > > On Mon, Sep 16, 2013 at 12:33:34PM +0200, René Koch (ovido) wrote:
> > > >
> > > > On Fri, 2013-09-06 at 22:26 +0100, Dan Kenigsberg wrote:
> > > >
> > > > > On Fri, Sep 06, 2013 at 06:23:42PM +0200, René Koch (ovido) wrote:
> > > > > > Hi,
> > > > > >
> > > > > > I want to start the discussion about Solaris support on oVirt again, as
> > > > > > there was no solution for it yet.
> > > > > >
> > > > > > On my oVirt 3.2.2 environment I installed Solaris 11 U1 with the
> > > > > > following specs:
> > > > > > * Operating System: Other
> > > > > > * nic1: rtl8139
> > > > > > * Disk1: IDE (Thin Provision)
> > > > > > * Host: CentOS 6.4 with qemu-kvm-0.12.1.2-2.355.0.1.el6.centos.7.x86_64
> > > > > >
> > > > > > These are the same settings as on my RHEL 6.4 KVM host (except I can
> > > > > > choose Solaris 10 as OS in virt-manager), which has KVM version:
> > > > > > qemu-kvm-rhev-0.12.1.2-2.295.el6_3.2.x86_64 (I wanted to use this host
> > > > > > as a RHEV host, so the qemu-kvm-rhev package is installed in case you
> > > > > > wonder)...
> > > > > >
> > > > > > What's working:
> > > > > > * OS installation on IDE disk
> > > > > > * Bringing up network interface
> > > > > >
> > > > > > What's not working on oVirt:
> > > > > > * Network connections - on RHEL 6.4 with plain libvirt/kvm this is
> > > > > > working...
> > > > > >
> > > > > >
> > > > > > I can see the mac address on my CentOS host, but can't ping the Solaris
> > > > > > vm:
> > > > > >
> > > > > > # brctl showmacs ovirtmgmt | egrep '00:99:4a:00:64:83|port'
> > > > > > port no mac addr is local? ageing timer
> > > > > > 2 00:99:4a:00:64:83 no 10.72
> > > > > >
> > > > > > # arp -an | grep '00:99:4a:00:64:83'
> > > > > > ? (10.0.100.123) at 00:99:4a:00:64:83 [ether] on ovirtmgmt
> > > > > >
> > > > > > When using tcpdump on the vnet interface which belongs to the Solaris vm
> > > > > > (ip 10.0.100.123), I can see ARP requests from the vm for the IP address of
> > > > > > my CentOS host (10.0.100.42) but no response to them. Same when pinging
> > > > > > other IPs in this network:
> > > > > >
> > > > > > # tcpdump -n -i vnet2
> > > > > > tcpdump: WARNING: vnet2: no IPv4 address assigned
> > > > > > tcpdump: verbose output suppressed, use -v or -vv for full protocol
> > > > > > decode
> > > > > > listening on vnet2, link-type EN10MB (Ethernet), capture size 65535
> > > > > > bytes
> > > > > > 18:15:35.987868 ARP, Request who-has 10.0.100.42 (Broadcast) tell
> > > > > > 10.0.100.123, length 46
> > > > > > 18:15:36.487399 ARP, Request who-has 10.0.100.42 (Broadcast) tell
> > > > > > 10.0.100.123, length 46
> > > > > > 18:15:36.987536 ARP, Request who-has 10.0.100.42 (Broadcast) tell
> > > > > > 10.0.100.123, length 46
> > > > > >
> > > > > > I also compared the qemu-kvm process list on the KVM host with the oVirt
> > > > > > machine and can't see much difference, except that oVirt passes more
> > > > > > information such as smbios....
> > > > > > oVirt host:
> > > > > > /usr/libexec/qemu-kvm
> > > > > > <snip>
> > > > > > -netdev tap,fd=27,id=hostnet0
> > > > > > -device
> > > > > > rtl8139,netdev=hostnet0,id=net0,mac=00:99:4a:00:64:83,bus=pci.0,addr=0x3
> > > > > >
> > > > > > RHEL KVM host:
> > > > > > /usr/libexec/qemu-kvm
> > > > > > <snip>
> > > > > > -netdev tap,fd=32,id=hostnet0
> > > > > > -device
> > > > > > rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:51:c2:97,bus=pci.0,addr=0x3
> > > > > >
> > > > > > Any suggestions on how to troubleshoot / get Solaris networking running
> > > > > > are welcome.
> > > > > > Changing the interface to e1000 doesn't work either.
> > > >
> > > >
> > > >
> > > > Thanks a lot for your answer and sorry for my late response - I was on sick
> > > > leave last week.
> > > >
> > > >
> > > > >
> > > > > - Would you share the output of your `brctl show`?
> > > >
> > > >
> > > > # brctl show
> > > > bridge name bridge id STP enabled interfaces
> > > > ;vdsmdummy; 8000.000000000000 no
> > > > dmz 8000.0025901ad140 no eth0.200
> > > > ovirtmgmt 8000.0025901ad140 no eth0.100
> > > > vnet1
> > > > vnet2
> > > > setup 8000.0025901ad140 no eth0.500
> > > > test2 8000.0025901ad140 no eth0.602
> > > > vnet0
> > > >
> > > > vnet1 is a Debian 7 vm which has a perfectly working network.
> > > > vnet2 is the Solaris 11 vm which can't communicate over the network.
> > > >
> > > >
> > > > > - Particularly, does the host device with 10.0.100.42 sit on the same
> > > > > bridge as your Solaris VM?
> > > >
> > > >
> > > > 10.0.100.42 is the IP of the oVirt host and it is configured on
> > > > this bridge (and reachable from the Debian 7 vm):
> > > >
> > > > # ip a | grep ovirt
> > > > 5: ovirtmgmt: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
> > > > state UNKNOWN
> > > > inet 10.0.100.42/24 brd 10.0.100.255 scope global ovirtmgmt
> > > >
> > > >
> > > > > - Do you see the who-has packet when sniffing the bridge device?
> > > >
> > > >
> > > >
> > > > Yes, I can see them:
> > > > 12:24:16.311218 ARP, Request who-has 10.0.100.42 (Broadcast) tell
> > > > 10.0.100.123, length 46
> > > > 12:24:16.311246 ARP, Reply 10.0.100.42 is-at 00:25:90:1a:d1:40, length
> > > > 28
> > > >
> > > > I can also see the MAC of the Solaris vm in the ARP cache of the oVirt host:
> > > > # arp -an
> > > > ? (10.0.100.123) at 00:99:4a:00:64:83 [ether] on ovirtmgmt
> > > >
> > > > But I can't see the MAC address of the oVirt host (10.0.100.42) in the ARP
> > > > table of the Solaris 11 vm. To be clearer: I can't see any MAC address in the
> > > > ARP table of the Solaris vm except the MAC of the vm itself.
> > > >
> > > >
> > > > > - Have you tried guest-to-guest communication (over the same bridge)?
> > > >
> > > >
> > > > I can communicate from the Debian 7 vm to all hosts in my network, but I
> > > > can't reach the Solaris 11 vm.
> > > > The network of the oVirt setup is working fine for Windows and various
> > > > Linux guests, but not for Solaris.
> > > >
> > > > Btw, exactly the same behavior on RHEV 3.2 with Solaris 10/11 - Windows and
> > > > Linux is working as expected, but no networking for Solaris guests...
> > >
> > > I'm afraid I'm out of ideas. Is the kernel the same on both hosts?
> > > Would you reply with the complete qemu command line? Maybe Michael could
> > > spot the difference between the working plain-KVM host and the
> > > non-working oVirt host.
> >
> >
> >
> > The kernel is not the same:
> > CentOS 6 (oVirt): 2.6.32-358.18.1.el6.x86_64
> > RHEL 6 (KVM): 2.6.32-358.11.1.el6.x86_64
> > But I think it's not kernel related, as I successfully ran Solaris on the
> > KVM server 1 year ago (so the kernel was an old RHEL 6.2 or 6.3 kernel) and
> > it didn't work back then on oVirt 3.1 either (the hypervisor was CentOS 6.2 or
> > 6.3 using dreyou's repository; now I'm running CentOS 6.4 with the official
> > oVirt repository). So, many kernel upgrades later, Solaris still has a
> > working network on plain KVM but not on oVirt 3.2 or RHEV 3.2.
> >
> > Here's the complete qemu cmd:
> >
> > RHEL 6.4 (plain KVM - Solaris networking is fine):
> > /usr/libexec/qemu-kvm -name solaris11 -S -M rhel6.3.0 -enable-kvm -m
> > 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid
> > e7d3b9a4-8605-dc4f-9092-41d68dd83735 -nodefconfig -nodefaults -chardev
> > socket,id=charmonitor,path=/var/lib/libvirt/qemu/solaris11.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/libvirt/images/solaris11.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev tap,fd=40,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:f0:27:d3,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0 -vnc 127.0.0.1:13 -vga cirrus -device intel-hda,id=sound0,bus=pci.0,addr=0x4 -device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> >
> > CentOS 6.4 (oVirt - Solaris can't reach other nodes):
> > /usr/libexec/qemu-kvm -name solaris11 -S -M rhel6.3.0 -cpu Nehalem
> > -enable-kvm -m 1024 -smp 1,sockets=1,cores=1,threads=1 -uuid
> > cbaa0cf5-d9e6-40e4-a269-9dc495f4c0d5 -smbios
> > type=1,manufacturer=oVirt,product=oVirt
> > Node,version=6-4.el6.centos.10,serial=54443858-4E54-2500-901A-0025901AD140,uuid=cbaa0cf5-d9e6-40e4-a269-9dc495f4c0d5,sku=solaris11 -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/solaris11.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=2013-09-06T15:08:22,driftfix=slew -no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive file=/rhev/data-center/484e62d7-7a01-4b5e-aec8-59d366100281/2055b80d-d59d-4fe6-80ec-5499674feed5/images/0786a273-780a-4f58-acc4-9b123953bde8/0c07f1f9-f9c0-4ac0-bfec-bee7a1c35a0b,if=none,id=drive-ide0-0-0,format=raw,serial=0786a273-780a-4f58-acc4-9b123953bde8,cache=none,werror=stop,rerror=stop,aio=threads -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive file=/var/run/vdsm/payload/cbaa0cf5-d9e6-40e4-a269-9dc495f4c0d5.f3b4e68b93e9d35d3f17328fae2e178e.img,if=none,media=cdrom,id=drive-ide0-1-1,readonly=on,format=raw,serial= -device ide-drive,bus=ide.1,unit=1,drive=drive-ide0-1-1,id=ide0-1-1 -netdev tap,fd=27,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=00:99:4a:00:64:83,bus=pci.0,addr=0x3 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/solaris11.com.redhat.rhevm.vdsm,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm -chardev socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/solaris11.org.qemu.guest_agent.0,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0 -chardev spicevmc,id=charchannel2,name=vdagent -device virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0 -spice port=5902,tls-port=5903,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on -k en-us -vga qxl -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
> >
>
> I don't think this is it, but maybe if you install and enable the macspoof
> hook http://www.ovirt.org/Vdsm_Hooks#VM-level_hooks you could remove
> ebtables rules that might introduce some difference between the two
> hosts.
Thanks for the hint, but either I didn't use it right or it doesn't
solve the issue.
- Downloaded macspoof hook from github (as no package is available for
CentOS/RHEL in 3.2 repository)
- updated UserDefinedVMProperties and restarted the engine (see the sketch below)
- set the custom property macspoof = true
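For reference, the engine-side step looked roughly like this; the property regex
and the --cver value are from memory of the hook's README, so double-check them
against your engine version:

# On the engine host: define the custom property, then restart the engine
engine-config -s "UserDefinedVMProperties=macspoof=^(true|false)$" --cver=3.2
service ovirt-engine restart

# On the hypervisor, the hook script should end up in the before_vm_start hook dir
ls /usr/libexec/vdsm/hooks/before_vm_start/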
Btw, ebtables seems to be disabled/accept all anyway:
# ebtables -L
Bridge table: filter
Bridge chain: INPUT, entries: 0, policy: ACCEPT
Bridge chain: FORWARD, entries: 0, policy: ACCEPT
Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
I also did something really nasty and created a Solaris 11 vm with
virt-manager on this hypervisor (it's not a production machine, so I took
the risk of interfering with oVirt). And guess what - networking of this
Solaris 11 guest is fine on the ovirtmgmt bridge.
So it seems to me that virt-manager is doing some magic for Solaris
guests that oVirt doesn't...
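One more datapoint that could narrow it down is whether the ARP replies seen on the
bridge actually reach the Solaris VM's tap device. A quick comparison while pinging
from the guest (vnet2 and ovirtmgmt are the names from the outputs above):

# On the oVirt host, while the Solaris guest pings 10.0.100.42
tcpdump -n -e -i vnet2 arp        # do the 'is-at' replies show up on the tap at all?
tcpdump -n -e -i ovirtmgmt arp    # compare with what the bridge device sees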
Regards,
René
[Users] Monitoring UI-Plugin 0.1 released
by René Koch (ovido)
I'm happy to announce version 0.1 of Monitoring UI-Plugin for oVirt and
RHEV.
This UI plugin integrates an existing Nagios or Icinga monitoring
solution into oVirt and RHEV and displays detailed service status
information for data centers, clusters, hosts, storage domains, virtual
machines and pools, including performance graphs.
Download this plugin from:
https://github.com/monitoring-ui-plugin/development/releases/monitoring-u...
For further information on how to install this plugin visit:
https://github.com/monitoring-ui-plugin/development/wiki/Installation-Doc...
Detailed usage documentation can be found here:
https://github.com/monitoring-ui-plugin/development/wiki/Usage-Documentation
Changelog:
New Features:
* Rewrite of Backend-Code
* Configuration files for Monitoring UI-Plugin
* Mapping of hosts/vms when names differ in oVirt/RHEV and
Nagios/Icinga
* New data backend provider infrastructure supporting
* NDOUtils (MySQL-Database)
* IDOUtils (MySQL-Database)
* IDOUtils (PostgreSQL-Database)
* mk-livestatus (Unix-socket)
* mk-livestatus (TCP-connection)
* New graph backend provider infrastructure supporting
* PNP4Nagios
* Service check results, details and graphs for
* Data Centers
* Clusters
* Storage Domains
* Pools
* Refresh and automatic refresh of service check results
* Resizable Tables
* Logging to logfile
* Installation with autotools support
* Perl pod documentation
* FastCGI support
* Cached database connections
* JavaScript templates
* jQuery and jQueryUI upgrades
* Supported virtualization platforms
* RHEV 3.2
* oVirt 3.2
* oVirt 3.3
* experimental SELinux support
* RPMs for Red Hat Enterprise Linux 6 and CentOS 6
Bugs fixed:
* CSS issues in Internet Explorer and Google Chrome
Infrastructure:
* Renamed to Monitoring UI-Plugin as it supports RHEV now, too
* Moved project to github
* Project URL: https://github.com/monitoring-ui-plugin/development
* Project Wiki:
https://github.com/monitoring-ui-plugin/development/wiki
If you have any questions or ideas, please drop me an email:
r.koch(a)ovido.at.
Thank you for using Monitoring UI-plugin.
--
Best Regards
René Koch
Senior Solution Architect
============================================
ovido gmbh - open source solutions
Brünner Straße 163, A-1210 Vienna
Phone: +43 720 / 530 670
Mobile: +43 660 / 512 21 31
E-Mail: r.koch(a)ovido.at
============================================
[Users] oVirt 3.3 gluster volume active but unable to activate domain
by Gianluca Cecchi
One engine with F19 and two nodes with F19,
all with the oVirt stable repo for F19.
The DC is defined as GlusterFS.
The volume is OK, but I can't activate the domain.
The relevant logs from when I click Activate are below.
On the engine:
2013-10-03 23:05:10,332 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand]
(pool-6-thread-50) START, GlusterServicesListVDSCommand(HostName =
f18ovn03, HostId = b67bcfd4-f868-49d5-8704-4936ee922249), log id:
5704c54f
2013-10-03 23:05:12,121 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) hostFromVds::selectedVds -
f18ovn01, spmStatus Free, storage pool Gluster
2013-10-03 23:05:12,142 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) SpmStatus on vds
80188ccc-83b2-4bc8-9385-8d07f7458a3c: Free
2013-10-03 23:05:12,144 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) starting spm on vds f18ovn01,
storage pool Gluster, prevId 1, LVER 0
2013-10-03 23:05:12,148 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand]
(pool-6-thread-46) FINISH, GlusterServicesListVDSCommand, return:
[org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@955283ba,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@1ef87397,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@c1b996b6,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@30199726,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@606c4879,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@2b860d38,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@f69fd1f7],
log id: 4a1b4d33
2013-10-03 23:05:12,159 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) START, SpmStartVDSCommand(HostName
= f18ovn01, HostId = 80188ccc-83b2-4bc8-9385-8d07f7458a3c,
storagePoolId = eb679feb-4da2-4fd0-a185-abbe459ffa70, prevId=1,
prevLVER=0, storagePoolFormatType=V3, recoveryMode=Manual,
SCSIFencing=false), log id: 62f11f2d
2013-10-03 23:05:12,169 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) spmStart polling started: taskId =
ab9f2f84-f89b-44e9-b508-a904420635f4
2013-10-03 23:05:12,232 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterServicesListVDSCommand]
(pool-6-thread-50) FINISH, GlusterServicesListVDSCommand, return:
[org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@b624c19b,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@3fcab178,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@e28bd497,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@50ebd507,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@813e865a,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@4c584b19,
org.ovirt.engine.core.common.businessentities.gluster.GlusterServerService@17720fd8],
log id: 5704c54f
2013-10-03 23:05:12,512 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-6) START,
GlusterVolumesListVDSCommand(HostName = f18ovn01, HostId =
80188ccc-83b2-4bc8-9385-8d07f7458a3c), log id: 39a3f45d
2013-10-03 23:05:12,595 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(DefaultQuartzScheduler_Worker-6) FINISH,
GlusterVolumesListVDSCommand, return:
{97873e57-0cc2-4740-ae38-186a8dd94718=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@a82da199,
d055b38c-2754-4e53-af5c-69cc0b8bf31c=org.ovirt.engine.core.common.businessentities.gluster.GlusterVolumeEntity@ef0c0180},
log id: 39a3f45d
2013-10-03 23:05:14,182 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
(DefaultQuartzScheduler_Worker-83) Failed in HSMGetTaskStatusVDS
method
2013-10-03 23:05:14,184 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetTaskStatusVDSCommand]
(DefaultQuartzScheduler_Worker-83) Error code AcquireHostIdFailure and
error message VDSGenericException: VDSErrorException: Failed to
HSMGetTaskStatusVDS, error = Cannot acquire host id
2013-10-03 23:05:14,186 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) spmStart polling ended: taskId =
ab9f2f84-f89b-44e9-b508-a904420635f4 task status = finished
2013-10-03 23:05:14,188 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) Start SPM Task failed - result:
cleanSuccess, message: VDSGenericException: VDSErrorException: Failed
to HSMGetTaskStatusVDS, error = Cannot acquire host id
2013-10-03 23:05:14,214 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) spmStart polling ended, spm status:
Free
2013-10-03 23:05:14,228 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(DefaultQuartzScheduler_Worker-83) START,
HSMClearTaskVDSCommand(HostName = f18ovn01, HostId =
80188ccc-83b2-4bc8-9385-8d07f7458a3c,
taskId=ab9f2f84-f89b-44e9-b508-a904420635f4), log id: 6e5df07f
2013-10-03 23:05:14,235 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
(DefaultQuartzScheduler_Worker-83) FINISH, HSMClearTaskVDSCommand, log
id: 6e5df07f
2013-10-03 23:05:14,237 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-83) FINISH, SpmStartVDSCommand, return:
org.ovirt.engine.core.common.businessentities.SpmStatusResult@5191523,
log id: 62f11f2d
2013-10-03 23:05:14,239 INFO
[org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
(DefaultQuartzScheduler_Worker-83) Running command:
SetStoragePoolStatusCommand internal: true. Entities affected : ID:
eb679feb-4da2-4fd0-a185-abbe459ffa70 Type: StoragePool
2013-10-03 23:05:14,258 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83)
IrsBroker::Failed::GetStoragePoolInfoVDS due to:
IrsSpmStartFailedException: IRSGenericException: IRSErrorException:
SpmStart failed
2013-10-03 23:05:14,307 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) Irs placed on server
80188ccc-83b2-4bc8-9385-8d07f7458a3c failed. Proceed Failover
2013-10-03 23:05:14,341 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) hostFromVds::selectedVds -
f18ovn03, spmStatus Free, storage pool Gluster
2013-10-03 23:05:14,345 ERROR
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) SPM Init: could not find reported
vds or not up - pool:Gluster vds_spm_id: 1
2013-10-03 23:05:14,360 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(DefaultQuartzScheduler_Worker-83) SPM selection - vds seems as spm
f18ovn01
2013-10-03 23:05:14,375 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStopVDSCommand]
(DefaultQuartzScheduler_Worker-83) START, SpmStopVDSCommand(HostName =
f18ovn01, HostId = 80188ccc-83b2-4bc8-9385-8d07f7458a3c, storagePoolId
= eb679feb-4da2-4fd0-a185-abbe459ffa70), log id: 59e35f64
2013-10-03 23:05:14,384 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand]
(DefaultQuartzScheduler_Worker-83) Command
org.ovirt.engine.core.vdsbroker.vdsbroker.HSMGetAllTasksStatusesVDSCommand
return value
On the first vdsm host:
Thread-22567::DEBUG::2013-10-03
23:05:00,996::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll
requests {}
Thread-22567::DEBUG::2013-10-03
23:05:00,996::task::974::TaskManager.Task::(_decref)
Task=`b00f597e-e73c-49a1-9bb5-14a
5d800c32e`::ref 0 aborting False
4a4a72d1-b1db-4498-aac1-549c986fde24::ERROR::2013-10-03
23:05:01,042::task::850::TaskManager.Task::(_setError)
Task=`4a4a72d1-b1db-4498-aac1-549c986fde24`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 318, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/share/vdsm/storage/sp.py", line 272, in startSpm
self.masterDomain.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id:
('d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', SanlockException(5, 'Sanlock
lockspace add failure', 'Input/output error'))
On the second vdsm host:
Thread-15262::DEBUG::2013-10-03
23:05:23,327::task::1168::TaskManager.Task::(prepare)
Task=`654b1197-b3a0-4cff-9859-6515c5a908f4`::finished: {'taskStatus':
{'code': 0, 'message': 'Task is initializing', 'taskState': 'running',
'taskResult': '', 'taskID': 'f8ada879-f322-4135-9803-b5f8217ac58a'}}
Thread-15262::DEBUG::2013-10-03
23:05:23,327::task::579::TaskManager.Task::(_updateState)
Task=`654b1197-b3a0-4cff-9859-6515c5a908f4`::moving from state
preparing -> state finished
Thread-15262::DEBUG::2013-10-03
23:05:23,327::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-15262::DEBUG::2013-10-03
23:05:23,328::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-15262::DEBUG::2013-10-03
23:05:23,328::task::974::TaskManager.Task::(_decref)
Task=`654b1197-b3a0-4cff-9859-6515c5a908f4`::ref 0 aborting False
f8ada879-f322-4135-9803-b5f8217ac58a::ERROR::2013-10-03
23:05:23,356::task::850::TaskManager.Task::(_setError)
Task=`f8ada879-f322-4135-9803-b5f8217ac58a`::Unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 857, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/storage/task.py", line 318, in run
return self.cmd(*self.argslist, **self.argsdict)
File "/usr/share/vdsm/storage/sp.py", line 272, in startSpm
self.masterDomain.acquireHostId(self.id)
File "/usr/share/vdsm/storage/sd.py", line 458, in acquireHostId
self._clusterLock.acquireHostId(hostId, async)
File "/usr/share/vdsm/storage/clusterlock.py", line 189, in acquireHostId
raise se.AcquireHostIdFailure(self._sdUUID, e)
AcquireHostIdFailure: Cannot acquire host id:
('d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291', SanlockException(5, 'Sanlock
lockspace add failure', 'Input/output error'))
f8ada879-f322-4135-9803-b5f8217ac58a::DEBUG::2013-10-03
23:05:23,358::task::869::TaskManager.Task::(_run)
Task=`f8ada879-f322-4135-9803-b5f8217ac58a`::Task._run:
f8ada879-f322-4135-9803-b5f8217ac58a () {} failed - stopping task
f8ada879-f322-4135-9803-b5f8217ac58a::DEBUG::2013-10-03
23:05:23,358::task::1194::TaskManager.Task::(stop)
Task=`f8ada879-f322-4135-9803-b5f8217ac58a`::stopping in state running
(force False)
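The error is raised by sanlock rather than by vdsm itself, so the sanlock side may
say more. A rough way to look at it; the mount path follows the usual glusterSD
layout and the domain UUID is taken from the error above, so adjust as needed:

# On the host that tried to become SPM
sanlock client status                # currently held lockspaces and resources
tail -n 50 /var/log/sanlock.log      # the failed add_lockspace should be logged here

# The lockspace lives in the domain's 'ids' file; check it is reachable via the mount
ls -ln /rhev/data-center/mnt/glusterSD/*/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/dom_md/ids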
Any hint?
Thanks,
Gianluca
[Users] iSCSI domain
by Jakub Bittner
Hello,
I have to change the IP address of my iSCSI data domain (the master). I am
using the latest stable oVirt 3.3 and I wonder if there is another (easier)
way to change the master domain's IP than exporting all VMs and importing
them into a newly created domain with the new IP.
Could anyone please point me to how to do that?
Thank you.
[Users] HA in oVirt 3.3
by emitor@gmail.com
Hi!
I'm testing HA in oVirt 3.3. Should it work when I'm running a VM on a
host that suddenly loses connectivity? Because that's not working for me.
HA works if I put the host where the VM is running into maintenance, but
if the host goes down the VM ends up in an unknown state. Is that how it
should work?
I would like the VM to automatically move to another host if the one it is
running on loses connectivity. Is this possible?
Regards!
--
*Emiliano Tortorella*
+598 98941176
emitor(a)gmail.com
Re: [Users] Migrate existing guest (kvm/libvirtd) into ovirt environment
by René Koch (ovido)
On Thu, 2013-10-03 at 16:06 +0200, lasse lindgren wrote:
> I have been trying to find good examples of how that could be
> done.
>
> But no luck there.
Here's an example of how to import a VM named "test-solaris10" from a
KVM machine (kvm-server) to the export domain located on the server
nfs-server:
$ virt-v2v -ic qemu+ssh://kvm-server/system -o rhev -os
nfs-server:/nfs/exports -of raw -oa sparse -n ovirtmgmt test-solaris10
man virt-v2v explains all the possible options...
>
>
> Is that a supported way?
Yes, that's the way you should import VMs...
>
>
>
> On Thu, Oct 3, 2013 at 3:57 PM, René Koch (ovido) <r.koch(a)ovido.at>
> wrote:
>
> On Thu, 2013-10-03 at 15:53 +0200, lasse lindgren wrote:
> > Hello!
> >
> >
> > I have thoughts about whether there is a way to take an existing
> > vm (kvm) running on BoxA that only uses libvirt and deploy it on
> > BoxB, running oVirt (oVirt Engine Version: 3.3.0-4.fc19)
> >
> >
> > The existing vm is a Linux host, and the disk type is qcow2.
> >
> > And since it's a working configuration I just want to move/migrate it
> > to the oVirt environment.
> >
>
>
> You can use virt-v2v to migrate your existing vm into oVirt.
> Just create an export domain in oVirt, shut down the vm and
> import it
> using virt-v2v.
>
>
> Regards,
> René
> >
> >
> >
> > Lasse Lindgren
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
>
>
>
> --
> Lasse Lindgren
>
[Users] Migrate existing guest (kvm/libvirtd) into ovirt environment
by lasse lindgren
Hello!
I have thoughts about whether there is a way to take an existing vm (kvm) running
on BoxA that only uses libvirt and deploy it on BoxB, running oVirt (oVirt
Engine Version: 3.3.0-4.fc19).
The existing vm is a Linux host, and the disk type is qcow2.
And since it's a working configuration I just want to move/migrate it to
the oVirt environment.
Lasse Lindgren
[Users] unable to start vm in 3.3 and f19 with gluster
by Gianluca Cecchi
Hello,
I'm testing GlusterFS on 3.3 with Fedora 19 systems:
one engine (ovirt) + 2 nodes (ovnode01 and ovnode02).
I successfully created a distributed replicated gluster volume composed of
two bricks (one for each vdsm node).
Suggestion:
If page
http://www.ovirt.org/Features/GlusterFS_Storage_Domain
is the reference, perhaps it would be better to explicitly specify
that one has to start the created volume before adding a storage
domain based on it (see the sketch below).
Not knowing Gluster could lead one to think that starting the volume is the
responsibility of the storage domain creation itself ...
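Something along these lines, before adding the storage domain (gv01 is the volume
name used later in this mail):

# On one of the gluster nodes, after 'gluster volume create ...'
gluster volume start gv01
gluster volume info gv01      # Status should report 'Started'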
All seems OK from a configuration point of view.
I uploaded a CentOS 6.4 ISO image into my ISO_DOMAIN (NFS exported from
the engine.. this will be another thread...) and
created a server VM with a 10 GB disk with thin allocation.
I get an error when starting the VM.
In engine.log:
2013-09-25 00:43:16,027 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(DefaultQuartzScheduler_Worker-44) Rerun vm
409c5dbe-5e70-40de-bf73-46ef484ea2d7. Called from vds ovnode02
2013-09-25 00:43:16,031 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(pool-6-thread-48) Correlation ID: 5ea15175, Job ID:
48128550-3633-4da4-8d9c-ab704be02f02, Call Stack: null, Custom Event
ID: -1, Message: Failed to run VM C6 on Host ovnode02.
2013-09-25 00:43:16,057 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Lock Acquired to object EngineLock [exclusiveLocks=
key: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 value: VM
, sharedLocks= ]
2013-09-25 00:43:16,070 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) START, IsVmDuringInitiatingVDSCommand( vmId =
409c5dbe-5e70-40de-bf73-46ef484ea2d7), log id: 7979c53b
2013-09-25 00:43:16,071 INFO
[org.ovirt.engine.core.vdsbroker.IsVmDuringInitiatingVDSCommand]
(pool-6-thread-48) FINISH, IsVmDuringInitiatingVDSCommand, return:
false, log id: 7979c53b
2013-09-25 00:43:16,086 INFO [org.ovirt.engine.core.bll.RunVmCommand]
(pool-6-thread-48) Running command: RunVmCommand internal: false.
Entities affected : ID: 409c5dbe-5e70-40de-bf73-46ef484ea2d7 Type: VM
2013-09-25 00:43:16,110 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread-48) START, IsoPrefixVDSCommand( storagePoolId =
6b3175e6-6fa2-473f-ba21-38917c413ba9, ignoreFailoverLimit = false),
log id: 7fd62f0f
2013-09-25 00:43:16,111 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.IsoPrefixVDSCommand]
(pool-6-thread
...
In vdsm.log on the node:
Thread-2915::ERROR::2013-09-25
00:43:20,108::vm::2062::vm.Vm::(_startUnderlyingVm)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::The vm start process
failed
Traceback (most recent call last):
File "/usr/share/vdsm/vm.py", line 2022, in _startUnderlyingVm
self._run()
File "/usr/share/vdsm/vm.py", line 2906, in _run
self._connection.createXML(domxml, flags),
File "/usr/lib64/python2.7/site-packages/vdsm/libvirtconnection.py",
line 76, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 2805, in createXML
if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: Unable to read from monitor: Connection reset by peer
Thread-2915::DEBUG::2013-09-25
00:43:20,176::vm::2448::vm.Vm::(setDownStatus)
vmId=`409c5dbe-5e70-40de-bf73-46ef484ea2d7`::Changed state to Down:
Unable to read from monitor: Connection reset by peer
libvirtEventLoop::WARNING::2013-09-25
00:43:20,114::clientIF::337::vds::(teardownVolumePath) Drive is not a
vdsm image: VOLWM_CHUNK_MB:1024 VOLWM_CHUNK_REPLICATE_MULT:2
VOLWM_FREE_PCT:50 _blockDev:False _checkIoTuneCategories:<bound method
Drive._checkIoTuneCategories of <vm.Drive object at 0x2b12950>>
_customize:<bound method Drive._customize of <vm.Drive object at
0x2b12950>> _deviceXML:<disk device="cdrom" snapshot="no"
type="file"><source
file="/var/run/vdsm/payload/409c5dbe-5e70-40de-bf73-46ef484ea2d7.393db1d8c9e756483db001b30a239296.img"
startupPolicy="optional"/><target bus="ide"
dev="hdd"/><readonly/><serial></serial></disk> _makeName:<bound method
Drive._makeName of <vm.Drive object at 0x2b12950>>
_validateIoTuneParams:<bound method Drive._validateIoTuneParams of
<vm.Drive object at 0x2b12950>> apparentsize:0 blockDev:False
cache:none conf:{'status': 'Down', 'acpiEnable': 'true',
'emulatedMachine': 'pc-1.0', 'vmId':
'409c5dbe-5e70-40de-bf73-46ef484ea2d7', 'pid': '0',
'memGuaranteedSize': 1365, 'timeOffset': '0', 'keyboardLayout':
'en-us', 'displayPort': '-1', 'displaySecurePort': '-1',
'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Nehalem', 'custom': {},
'clientIp': '', 'exitCode': 1, 'nicModel': 'rtl8139,pv',
'smartcardEnable': 'false', 'kvmEnable': 'true', 'pitReinjection':
'false', 'transparentHugePages': 'true', 'devices': [{'device':
'scsi', 'model': 'virtio-scsi', 'type': 'controller'}, {'device':
'qxl', 'specParams': {'vram': '65536'}, 'type': 'video', 'deviceId':
'70eadea2-6b53-
Let me know if you need the full logs.
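'Unable to read from monitor: Connection reset by peer' generally means the qemu
process died right after being started, so the real reason is more likely in the
per-VM qemu log than in vdsm.log. A quick check on the host that tried to run the
VM (C6 is the VM name from the engine events above; the path is the standard
libvirt location):

tail -n 50 /var/log/libvirt/qemu/C6.log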
The disk image itself seems ok:
[root@ovnode02 ~]# ll
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm 1048576 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm 268 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.meta
[root@ovnode02 ~]# qemu-img info
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
image: /rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
Also on the other node
[root@ovnode01 vdsm]# ll
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/
total 1025
-rw-rw----. 1 vdsm kvm 10737418240 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161
-rw-rw----. 1 vdsm kvm 1048576 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.lease
-rw-r--r--. 1 vdsm kvm 268 Sep 25 00:42
dff09892-bc60-4de5-85c0-2a1fa215a161.meta
[root@ovnode01 vdsm]# qemu-img info
/rhev/data-center/mnt/glusterSD/ovnode01\:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
image: /rhev/data-center/mnt/glusterSD/ovnode01:gv01/20042e7b-0929-48ca-ad40-2a2aa22f0689/images/d004045e-620b-4d90-8a7f-6c6d26393a08/dff09892-bc60-4de5-85c0-2a1fa215a161
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: 0
[root@ovnode02 ~]# gluster volume list
gv01
[root@ovnode02 ~]# gluster volume info
Volume Name: gv01
Type: Replicate
Volume ID: 7cf18f87-eef8-47cb-b469-8e5f92bfcd98
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.33.41:/export/brick1/sdb1
Brick2: 192.168.33.42:/export/brick1/sdb1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
auth.allow: *
user.cifs: on
nfs.disable: off
During volume creation I noticed this message, which I don't know whether
it could have an impact:
Volume Option group=virt could not be set on gv01
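That warning means the engine could not apply the 'virt' option group, which as far
as I know sets the performance and caching options recommended for VM images (for
example disabling quick-read and stat-prefetch). Since storage.owner-uid/gid are
already set above it may not be the culprit, but the group can be re-applied by
hand, assuming the group file ships with your glusterfs packages:

# On a gluster node; requires /var/lib/glusterd/groups/virt to exist
gluster volume set gv01 group virt
gluster volume info gv01      # verify the reconfigured options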
See also this image for the events generated in the engine GUI:
https://docs.google.com/file/d/0BwoPbcrMv8mvZEp6UmhPV0ttaVU/edit?usp=sharing
Possibly the OpenStack-related ones could be misleading; honestly, I
haven't understood their meaning....
Gianluca