[Users] Problem with libvirt
by Juan Jose
Hello everybody,
I have installed and configured an oVirt 3.1 engine on Fedora 17, with a
Fedora 17 node connected to it. I have defined one NFS domain for my VMs and
another for ISOs. I start a Fedora 17 Server VM with Run Once and the
machine starts without problems; after that I proceed with the installation
onto its virtual disk, but when I reach the point of defining partitions on
the virtual disk the machine freezes, I start to receive engine errors, and
the default data center goes into a non-responsive status.
I can see these messages in /var/log/ovirt-engine/engine.log, which I attach
to this message:
....
2013-01-31 11:43:23,957 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] Recieved a Device without an address
when processing VM da09284e-3189-428b-a879-6201f7a5ca87 devices, skipping
device: {shared=false, volumeID=1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
index=0, propagateErrors=off, format=raw, type=disk, truesize=8589938688,
reqsize=0, bootOrder=2, iface=virtio,
volumeChain=[Ljava.lang.Object;@1ea2bdf9,
imageID=49e21bfc-384b-4bea-8013-f02b1be137c7,
domainID=57d184a0-908b-49b5-926f-cd413b9e6526, specParams={},
optional=false, needExtend=false,
path=/rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images/49e21bfc-384b-4bea-8013-f02b1be137c7/1d0e9fdf-c4bc-4894-8ff1-7a5e185d57a4,
device=disk, poolID=d6e7e8b8-49c7-11e2-a261-000a5e429f63, readonly=false,
deviceId=49e21bfc-384b-4bea-8013-f02b1be137c7, apparentsize=8589934592}.
2013-01-31 11:43:23,960 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=4dca1c64-dbf8-4e31-b359-82cf0e259f65,Device=qxl,Type=video,BootOrder=0,SpecParams={vram=65536},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,961 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,962 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:23,963 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=614bc0b4-64d8-4058-8bf8-83db62617e00,Device=bridge,Type=interface,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:23,964 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-47) [75664f2b] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=49e21bfc-384b-4bea-8013-f02b1be137c7,Device=disk,Type=disk,BootOrder=0,SpecParams={},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=false,alias=
2013-01-31 11:43:26,063 INFO
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM Fedora17
da09284e-3189-428b-a879-6201f7a5ca87 moved from WaitForLaunch --> PoweringUp
2013-01-31 11:43:26,064 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] START, FullListVdsCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63, vds=null,
vmIds=[da09284e-3189-428b-a879-6201f7a5ca87]), log id: f68f564
2013-01-31 11:43:26,086 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.FullListVdsCommand]
(QuartzScheduler_Worker-24) [7d021319] FINISH, FullListVdsCommand, return:
[Lorg.ovirt.engine.core.vdsbroker.xmlrpc.XmlRpcStruct;@33c68023, log id:
f68f564
2013-01-31 11:43:26,091 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=aba73f2f-e951-4eba-9da4-8fb58315df2c,Device=memballoon,Type=balloon,BootOrder=0,SpecParams={model=virtio},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:26,092 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-24) [7d021319] VM
da09284e-3189-428b-a879-6201f7a5ca87 managed non plugable device was
removed unexpetedly from libvirt:
VmId=da09284e-3189-428b-a879-6201f7a5ca87,DeviceId=9bfb770c-13fa-4bf6-9f1f-414927bc31b0,Device=cdrom,Type=disk,BootOrder=0,SpecParams={path=},Address=,IsManaged=true,IsPlugged=true,IsReadOnly=true,alias=
2013-01-31 11:43:31,721 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (ajp--0.0.0.0-8009-11)
[28d7a789] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: da09284e-3189-428b-a879-6201f7a5ca87 Type: VM
2013-01-31 11:43:31,724 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] START, SetVmTicketVDSCommand(vdsId =
7d3491e8-49ce-11e2-8b2e-000a5e429f63,
vmId=da09284e-3189-428b-a879-6201f7a5ca87, ticket=qmcnuOICblb3,
validTime=120,m userName=admin@internal,
userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 6eaacb95
2013-01-31 11:43:31,758 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--0.0.0.0-8009-11) [28d7a789] FINISH, SetVmTicketVDSCommand, log id:
6eaacb95
...
2013-01-31 11:49:13,392 WARN
[org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
(QuartzScheduler_Worker-81) [164eaa47] domain
57d184a0-908b-49b5-926f-cd413b9e6526 in problem. vds: host1
2013-01-31 11:49:54,121 ERROR
[org.ovirt.engine.core.vdsbroker.VdsUpdateRunTimeInfo]
(QuartzScheduler_Worker-73) [73213e4f] vds::refreshVdsStats Failed
getVdsStats, vds = 7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, error =
VDSNetworkException: VDSNetworkException:
2013-01-31 11:49:54,172 WARN [org.ovirt.engine.core.vdsbroker.VdsManager]
(QuartzScheduler_Worker-73) [73213e4f]
ResourceManager::refreshVdsRunTimeInfo::Failed to refresh VDS , vds =
7d3491e8-49ce-11e2-8b2e-000a5e429f63 : host1, VDS Network Error, continuing.
VDSNetworkException:
....
In the Events window, after the VM freezes, I see the events below:
2013-Jan-31, 11:50:52 Failed to elect Host as Storage Pool Manager for
Data Center Default. Setting status to Non-Operational.
2013-Jan-31, 11:50:52 VM Fedora17 was set to the Unknown status.
2013-Jan-31, 11:50:52 Host host1 is non-responsive.
2013-Jan-31, 11:49:55 Invalid status on Data Center Default. Setting Data
Center status to Non-Responsive (On host host1, Error: Network error during
communication with the Host.).
2013-Jan-31, 11:44:25 VM Fedora17 started on Host host1
Any suggestions about the problem? It seems to be a libvirt problem; I will
continue investigating.
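One host-side check worth doing when the data center goes non-responsive like
this (a rough sketch, not a definitive procedure; the UUIDs are the ones from
the engine.log excerpt above, and the service names and paths are the Fedora
defaults):

# Is vdsm still answering? The VDSNetworkException in engine.log usually
# points at the host side rather than the engine.
systemctl status vdsmd libvirtd
vdsClient -s 0 getVdsStats | head

# Is the NFS data domain still reachable from the node? A hung NFS mount
# will make this ls block instead of returning.
ls /rhev/data-center/d6e7e8b8-49c7-11e2-a261-000a5e429f63/57d184a0-908b-49b5-926f-cd413b9e6526/images

# Anything interesting on the node around the time of the freeze?
grep -i error /var/log/vdsm/vdsm.log | tail -n 50
tail -n 50 /var/log/libvirt/libvirtd.log   # if libvirtd file logging is enabled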
Many thanks in advance,
Juanjo.
[Users] USB mapping.
by Alexei Yakovlev
Hi, is there any way to attach USB devices that are connected to the host to a VM? I need a USB key to be visible inside a virtual machine; the key is physically attached to the oVirt host, which runs Fedora 17.
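For reference, the underlying libvirt mechanism is USB host passthrough via a
<hostdev> element; below is a minimal sketch, assuming a key with
vendor:product ID 1234:5678 and a VM named myvm (both placeholders). Doing
this with virsh is unmanaged from oVirt's point of view (the engine will not
track the device and it will not survive migration), and on a vdsm host virsh
may ask for the SASL credentials vdsm configured. As far as I know there is
also a vdsm "hostusb" hook that does this in a managed way through a custom
VM property.

# Find the key's vendor:product ID on the host.
lsusb

# Hypothetical hostdev description; the IDs and the VM name are placeholders.
cat > /tmp/usbkey.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
EOF

# Hot-plug it into the running VM.
virsh attach-device myvm /tmp/usbkey.xml --live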
[Users] Testday aftermath
by Joop
Yesterday was testday but not much fun :-(
I had a reasonably working setup, but for testday I decided to start from
scratch, and that ended rather soon. Installing and configuring the engine
was not a problem, but I want a setup where I have two Gluster hosts and two
hosts as VM hosts.
I added a second cluster using the web interface, set it to Gluster storage,
and added two minimally installed Fedora 18 hosts on which I set up static
networking and verified that it worked.
Adding the two hosts went OK, but adding a volume gives the following error
on the engine:
2013-02-01 09:32:39,084 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Running command:
CreateGlusterVolumeCommand internal: false. Entities affected : ID:
8720debc-a184-4b61-9fa8-0fdf4d339b9a Type: VdsGroups
2013-02-01 09:32:39,117 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] START,
CreateGlusterVolumeVDSCommand(HostName = st02, HostId =
e7b74172-2f95-43cb-83ff-11705ae24265), log id: 4270f4ef
2013-02-01 09:32:39,246 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,248 WARN
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Weird return value: StatusForXmlRpc
[mCode=4106, mMessage=XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
]
2013-02-01 09:32:39,249 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Failed in CreateGlusterVolumeVDS method
2013-02-01 09:32:39,250 ERROR
[org.ovirt.engine.core.vdsbroker.vdsbroker.BrokerCommandBase]
(ajp--127.0.0.1-8702-4) [5ea886d] Error code unexpected and error message
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,254 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (ajp--127.0.0.1-8702-4)
[5ea886d] Command CreateGlusterVolumeVDS execution failed. Exception:
VDSErrorException: VDSGenericException: VDSErrorException: Failed to
CreateGlusterVolumeVDS, error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,255 INFO
[org.ovirt.engine.core.vdsbroker.gluster.CreateGlusterVolumeVDSCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] FINISH, CreateGlusterVolumeVDSCommand,
log id: 4270f4ef
2013-02-01 09:32:39,256 ERROR
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Command
org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand throw Vdc Bll
exception. With error message VdcBLLException:
org.ovirt.engine.core.vdsbroker.vdsbroker.VDSErrorException:
VDSGenericException: VDSErrorException: Failed to CreateGlusterVolumeVDS,
error = XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
2013-02-01 09:32:39,268 INFO
[org.ovirt.engine.core.bll.gluster.CreateGlusterVolumeCommand]
(ajp--127.0.0.1-8702-4) [5ea886d] Lock freed to object EngineLock
[exclusiveLocks= key: 8720debc-a184-4b61-9fa8-0fdf4d339b9a value: GLUSTER
, sharedLocks= ]
2013-02-01 09:32:40,902 INFO
[org.ovirt.engine.core.vdsbroker.gluster.GlusterVolumesListVDSCommand]
(QuartzScheduler_Worker-85) START, GlusterVolumesListVDSCommand(HostName =
st02, HostId = e7b74172-2f95-43cb-83ff-11705ae24265), log id: 61cafb32
And on ST01 the following appears in vdsm.log:
Thread-644::DEBUG::2013-02-01
10:24:06,378::BindingXMLRPC::913::vds::(wrapper) client
[192.168.216.150]::call volumeCreate with ('GlusterData',
['st01.nieuwland.nl:/home/gluster-data',
'st02.nieuwland.nl:/home/gluster-data'], 2, 0, ['TCP']) {}
MainProcess|Thread-644::DEBUG::2013-02-01
10:24:06,381::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
--mode=script volume create GlusterData replica 2 transport TCP
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
--xml' (cwd None)
MainProcess|Thread-644::DEBUG::2013-02-01
10:24:06,639::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainProcess|Thread-644::ERROR::2013-02-01
10:24:06,640::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/gluster/cli.py", line 446, in volumeCreate
xmltree = _execGlusterXml(command)
File "/usr/share/vdsm/gluster/cli.py", line 89, in _execGlusterXml
raise ge.GlusterXmlErrorException(err=out)
GlusterXmlErrorException: XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
Thread-644::ERROR::2013-02-01
10:24:06,655::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 63, in volumeCreate
transportList)
File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeCreate
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_callmethod
raise convert_to_error(kind, result)
GlusterXmlErrorException: XML error
error: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<cliOutput><volCreate><count>2</count><bricks>
st01.nieuwland.nl:/home/gluster-data st02.nieuwland.nl:/home/gluster-data
</bricks><transport>tcp</transport><type>2</type><volname>GlusterData</volname><replica-count>2</replica-count></volCreate></cliOutput>
And on ST02 I get this error:
MainProcess|Thread-93::DEBUG::2013-02-01
10:24:09,540::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
--mode=script peer status --xml' (cwd None)
MainProcess|Thread-93::DEBUG::2013-02-01
10:24:09,622::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
Thread-93::DEBUG::2013-02-01
10:24:09,624::BindingXMLRPC::920::vds::(wrapper) return hostsList with
{'status': {'message': 'Done', 'code': 0}, 'hosts': [{'status':
'CONNECTED', 'hostname': '192.168.216.152', 'uuid':
'15c7a739-6735-43f5-a1c0-3c7ff3469588'}, {'status': 'CONNECTED',
'hostname': '192.168.216.151', 'uuid':
'd53f4dcb-1116-4fbc-8d5b-e882175264aa'}]}
Thread-94::DEBUG::2013-02-01
10:24:09,639::BindingXMLRPC::913::vds::(wrapper) client
[192.168.216.150]::call volumesList with () {}
MainProcess|Thread-94::DEBUG::2013-02-01
10:24:09,641::misc::84::Storage.Misc.excCmd::(<lambda>) '/usr/sbin/gluster
--mode=script volume info --xml' (cwd None)
MainProcess|Thread-94::DEBUG::2013-02-01
10:24:09,724::misc::84::Storage.Misc.excCmd::(<lambda>) SUCCESS: <err> =
''; <rc> = 0
MainProcess|Thread-94::ERROR::2013-02-01
10:24:09,725::supervdsmServer::80::SuperVdsm.ServerCallback::(wrapper)
Error in wrapper
Traceback (most recent call last):
File "/usr/share/vdsm/supervdsmServer.py", line 78, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/supervdsmServer.py", line 352, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/gluster/cli.py", line 45, in wrapper
return func(*args, **kwargs)
File "/usr/share/vdsm/gluster/cli.py", line 431, in volumeInfo
raise ge.GlusterXmlErrorException(err=[etree.tostring(xmltree)])
GlusterXmlErrorException: XML error
error: <cliOutput><opRet>0</opRet><opErrno>0</opErrno><opErrstr
/><volInfo><volumes><volume><name>GlusterData</name><id>e5cf9724-ecb0-41a3-abdf-0ea891e49f92</id><type>2</type><status>0</status><brickCount>2</brickCount><distCount>2</distCount><stripeCount>1</stripeCount><replicaCount>2</replicaCount><transport>0</transport><bricks><brick>st01.nieuwland.nl:/home/gluster-data</brick><brick>st02.nieuwland.nl:/home/gluster-data</brick></bricks><optCount>0</optCount><options
/></volume><count>1</count></volumes></volInfo></cliOutput>
Thread-94::ERROR::2013-02-01
10:24:09,741::BindingXMLRPC::929::vds::(wrapper) vdsm exception occured
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 918, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
rv = func(*args, **kwargs)
File "/usr/share/vdsm/gluster/api.py", line 56, in volumesList
return {'volumes': self.svdsmProxy.glusterVolumeInfo(volumeName)}
File "/usr/share/vdsm/supervdsm.py", line 81, in __call__
return callMethod()
File "/usr/share/vdsm/supervdsm.py", line 72, in <lambda>
**kwargs)
File "<string>", line 2, in glusterVolumeInfo
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 773, in
_callmethod
raise convert_to_error(kind, result)
GlusterXmlErrorException: XML error
I have included a lot more logs in the attached zip.
Running 'gluster volume info' shows me the correct data on both servers.
The brick directory is created under /home; SELinux was enforcing, but
permissive gives the same error.
[root@st01 ~]# rpm -aq | grep vdsm
vdsm-gluster-4.10.3-6.fc18.noarch
vdsm-xmlrpc-4.10.3-6.fc18.noarch
vdsm-cli-4.10.3-6.fc18.noarch
vdsm-python-4.10.3-6.fc18.x86_64
vdsm-4.10.3-6.fc18.x86_64
[root@st01 ~]# rpm -aq | grep gluster
glusterfs-fuse-3.3.1-8.fc18.x86_64
vdsm-gluster-4.10.3-6.fc18.noarch
glusterfs-3.3.1-8.fc18.x86_64
glusterfs-server-3.3.1-8.fc18.x86_64
libvirt-0.10.2.2-3.fc18.x86_64 + deps same version
qemu-kvm-1.2.2-2.fc18.x86_64 + deps same version
Anything else I can provide or do to fix this problem?
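If it helps to narrow this down, the calls vdsm makes can be repeated by hand
on one of the storage hosts to capture the raw XML that the parser rejects (a
sketch; the command line is the one visible in vdsm.log above):

# Same volume info call vdsm issues via supervdsm; keep the output for a bug report.
gluster --mode=script volume info --xml > /tmp/gluster-volinfo.xml 2>&1
cat /tmp/gluster-volinfo.xml

# Record the exact package versions the XML came from.
rpm -q glusterfs glusterfs-server vdsm vdsm-gluster

The usual suspect with this kind of GlusterXmlErrorException is a mismatch
between the --xml layout of the installed gluster release and what
/usr/share/vdsm/gluster/cli.py expects, but that is an assumption on my part.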
Joop
--
irc: jvandewege
@fosdem ;-)
[Users] Glusterfs HA doubts
by Adrian Gibanel
In oVirt 3.1 GlusterFS support was added. It was an easy way to replicate your virtual machine storage without too much hassle.
There are two main howtos:
* http://www.middleswarth.net/content/installing-ovirt-31-and-glusterfs-usi... (Robert Middleswarth)
* http://blog.jebpages.com/archives/ovirt-3-1-glusterized/ (Jason Brooks).
1) What about performance?
I've done some tests with rsync backups (even using the suggested --inplace rsync switch), which involve small files. These backups were done onto locally mounted GlusterFS volumes. Instead of lasting about 2 hours, the backups lasted around 15 hours.
Is there perhaps something that only happens with small files, while performance with big files is OK?
2) How to know the current status?
In DRBD you know the status by checking a proc file, if I remember correctly. I also remember that GlusterFS doesn't have an equivalent, and there's no obvious way to know whether all the files are synced.
If you have tried it, how do you know whether both sets of virtual disk images are synced?
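For what it's worth, with GlusterFS 3.3 the self-heal daemon can report
pending-heal state per volume, which is the closest thing I know of to DRBD's
proc file (a sketch; "vmdata" is a placeholder volume name):

# Files still queued for self-heal on a replicated volume; empty lists on
# both bricks mean the replicas are in sync.
gluster volume heal vmdata info

# Entries that failed to heal or ended up split-brain, if any.
gluster volume heal vmdata info heal-failed
gluster volume heal vmdata info split-brain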
3) Mount DNS resolution
If you check Jason Brooks' howto you will see that it uses a hostname to refer to the NFS mount. If you want to perform HA you need your storage to stay mounted, and if the server1 host is down it doesn't help that the NFS mount point associated with the storage is server1:/vms/ rather than server2:/vms/. Checking Middleswarth's howto, I think he does the same thing.
Let me explain a bit more. My example setup has two host machines: you run a set of virtual machines on one of them, and the other has no virtual machines running. Where is the virtual machine storage located? On the GlusterFS volume.
Say the first machine mounts the GlusterFS volume over NFS (as an example). If it uses its own hostname for the NFS mount, then when that host goes down the second host isn't going to be able to mount the storage when the VMs are restarted in HA mode.
If it instead uses the second host's hostname for the NFS mount, then if the second host goes down the virtual machines cannot access their virtual disks.
A workaround I have thought of for this situation is to use /etc/hosts on both machines so that whatever.domain.com resolves, on each host, to that host's own IP.
I think GlusterFS has a way of mounting the share with "-t glusterfs" that can somehow avoid these hostname problems, but I haven't read much about it, so I'm not too sure.
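A sketch of that native-client mount, assuming a replicated volume named vms
served by placeholder hosts st01/st02.example.com; as far as I know the
mount.glusterfs helper in 3.3 accepts a backupvolfile-server option for
exactly this case:

# The server named here is only used to fetch the volume layout; after that
# the FUSE client talks to all bricks directly, so either server may go down.
mount -t glusterfs -o backupvolfile-server=st02.example.com \
    st01.example.com:/vms /mnt/vms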
4) So my doubts basically are:
* Has anyone set up a two-host GlusterFS HA oVirt cluster where the storage is a replicated GlusterFS volume shared and stored by both hosts?
* Does HA work when one of the hosts goes down?
* Or does it complain about the hostname, as I suspect?
* Any tips to ensure the best performance?
Thank you.
--
Adrián Gibanel
I.T. Manager
+34 675 683 301
www.btactic.com
[Users] How to connect to console after another user logged out
by Gianluca Cecchi
Hello,
I have 3.2 beta and a Fedora 18 VM configured with SPICE access.
One user connects to the portal and accesses it, then terminates his session,
closes the SPICE console window, and logs out from the portal.
Another user connects to the portal and tries to connect to the same VM,
which is still powered on, and receives this message:
Error:
F18:
Console connection denied. Another user has already accessed the
console of this VM. The VM should be rebooted to allow another user to
access it, or changed by an admin to not enforce reboot between users
accessing its console.
I didn't find a way to accomplish the "or changed by an admin to not
enforce reboot..." part.
Where is this set?
Thanks,
Gianluca
[Users] Cannot add local storage on 3.2
by Matt .
Hi All,
I have made a successful install of 3.2, but a lot seems to have changed
about local storage compared to 3.1.
When I want to add storage on a local host, a data center and a cluster are
created for that host, named after the host. After this, the local storage
is not available, and I'm not able to add extra storage either.
I have looked at "adding a new storage domain", and when I select my host I
still need to set my path, but I already did this when configuring the local
storage before.
The message I receive while configuring the local storage is:
error cannot add storage. internal error storage connection doesn't exist
What goes wrong here? This was much easier in 3.1!
[Users] 3.2 beta and f18 host on dell R815 problem
by Gianluca Cecchi
During the install of the server I get this:
Host installation failed. Fix installation issues and try to Re-Install
In the deploy log:
2013-01-31 12:17:30 DEBUG
otopi.plugins.ovirt_host_deploy.vdsm.hardware
hardware._isVirtualizationEnabled:144 virtualization support
GenuineIntel (cpu: False, bios: True)
2013-01-31 12:17:30 DEBUG otopi.context context._executeMethod:127
method exception
Traceback (most recent call last):
File "/tmp/ovirt-SfEARpd3h4/pythonlib/otopi/context.py", line 117,
in _executeMethod
method['method']()
File "/tmp/ovirt-SfEARpd3h4/otopi-plugins/ovirt-host-deploy/vdsm/hardware.py",
line 170, in _validate_virtualization
_('Hardware does not support virtualization')
RuntimeError: Hardware does not support virtualization
2013-01-31 12:17:30 ERROR otopi.context context._executeMethod:136
Failed to execute stage 'Setup validation': Hardware does not support
virtualization
Note the GenuineIntel above... ??
But the CPU is actually AMD:
[root@f18ovn03 ~]# lsmod|grep kvm
kvm_amd 59623 0
kvm 431794 1 kvm_amd
cat /proc/cpuinfo
...
processor : 47
vendor_id : AuthenticAMD
cpu family : 16
model : 9
model name : AMD Opteron(tm) Processor 6174
stepping : 1
microcode : 0x10000d9
cpu MHz : 800.000
cache size : 512 KB
physical id : 3
siblings : 12
core id : 5
cpu cores : 12
apicid : 59
initial apicid : 59
fpu : yes
fpu_exception : yes
cpuid level : 5
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt
pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl
nonstop_tsc extd_apicid amd_dcm pni monitor cx16 popcnt lahf_lm
cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch
osvw ibs skinit wdt nodeid_msr hw_pstate npt lbrv svm_lock nrip_save
pausefilter
bogomips : 4400.44
TLB size : 1024 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm stc 100mhzsteps hwpstate
Any hint?
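A few host-side checks that should tell whether AMD-V is actually usable,
independently of what the deploy plugin detected (a sketch; standard Fedora
paths assumed):

# AMD-V (svm) advertised by the CPU?
grep -c -w svm /proc/cpuinfo

# kvm_amd loaded and the kvm device present?
lsmod | grep kvm
ls -l /dev/kvm

# Any "disabled by bios" complaints from kvm?
dmesg | grep -i kvm

# What libvirt itself reports for the host CPU.
virsh capabilities | grep -A3 '<cpu>'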
Gianluca
[Users] 3.2 beta: Amd Opteron 6174 wrongly detected as 8 socket
by Gianluca Cecchi
Hello,
after deploying a node that has 4 sockets with 12 cores each, the topology is
wrongly detected in the web admin GUI.
See:
https://docs.google.com/file/d/0BwoPbcrMv8mvdjdYNjVfT2NWY0U/edit
It says 8 sockets each with 6 cores....
Output of
# virsh capabilities
here:
https://docs.google.com/file/d/0BwoPbcrMv8mveG5OaVBZN1VENlU/edit
output of cpuid here:
https://docs.google.com/file/d/0BwoPbcrMv8mvUFFRYkZEX0lmRG8/edit
I also ran this:
[root@f18ovn03 ~]# vdsClient -s 0 getVdsCaps
HBAInventory = {'iSCSI': [{'InitiatorName':
'iqn.1994-05.com.redhat:f9baf5a8f6c3'}], 'FC': []}
ISCSIInitiatorName = iqn.1994-05.com.redhat:f9baf5a8f6c3
bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
'', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr':
'', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
'00:00:00:00:00:00'}}
bridges = {'ovirtmgmt': {'addr': '192.168.1.102', 'cfg': {'DOMAIN':
'localdomain.local', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4',
'DNS3': '82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1':
'192.168.1.103', 'PREFIX0': '24', 'DEFROUTE': 'yes',
'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0', 'NM_CONTROLLED': 'no',
'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT': 'yes', 'IPV6INIT':
'no'}, 'mtu': '1500', 'netmask': '255.255.255.0', 'stp': 'off',
'ports': ['em1']}}
clusterLevels = ['3.0', '3.1', '3.2']
cpuCores = 48
cpuFlags = fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,mmx,fxsr,sse,sse2,ht,syscall,nx,mmxext,fxsr_opt,pdpe1gb,rdtscp,lm,3dnowext,3dnow,constant_tsc,rep_good,nopl,nonstop_tsc,extd_apicid,amd_dcm,pni,monitor,cx16,popcnt,lahf_lm,cmp_legacy,svm,extapic,cr8_legacy,abm,sse4a,misalignsse,3dnowprefetch,osvw,ibs,skinit,wdt,nodeid_msr,hw_pstate,npt,lbrv,svm_lock,nrip_save,pausefilter,model_athlon,model_Opteron_G3,model_Opteron_G1,model_phenom,model_Opteron_G2
cpuModel = AMD Opteron(tm) Processor 6174
cpuSockets = 8
cpuSpeed = 800.000
cpuThreads = 48
emulatedMachines = ['pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0',
'pc-0.15', 'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10',
'isapc', 'pc-1.2', 'none', 'pc', 'pc-1.1', 'pc-1.0', 'pc-0.15',
'pc-0.14', 'pc-0.13', 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
guestOverhead = 65
hooks = {}
kvmEnabled = true
lastClient = 192.168.1.111
lastClientIface = ovirtmgmt
management_ip =
memSize = 64418
netConfigDirty = False
networks = {'ovirtmgmt': {'iface': 'ovirtmgmt', 'addr':
'192.168.1.102', 'cfg': {'DOMAIN': 'localdomain.local', 'UUID':
'60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3': '82.113.193.3',
'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103', 'PREFIX0': '24',
'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'DELAY': '0',
'NM_CONTROLLED': 'no', 'BOOTPROTO': 'none', 'GATEWAY0': '192.168.1.1',
'DNS2': '8.8.8.8', 'DEVICE': 'ovirtmgmt', 'TYPE': 'Bridge', 'ONBOOT':
'yes', 'IPV6INIT': 'no'}, 'mtu': '1500', 'netmask': '255.255.255.0',
'stp': 'off', 'bridged': True, 'gateway': '0.0.0.0', 'ports':
['em1']}}
nics = {'em4': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'bed68125-4345-4995-ba49-a6e5580c58dd', 'NAME': 'em4', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:82',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:82', 'speed': 0}, 'em1': {'addr': '', 'cfg':
{'BRIDGE': 'ovirtmgmt', 'DOMAIN': 'localdomain.local', 'DEVICE':
'em1', 'UUID': '60d40d4a-d8ab-4f5b-bd48-2e807df36be4', 'DNS3':
'82.113.193.3', 'IPADDR0': '192.168.1.102', 'DNS1': '192.168.1.103',
'PREFIX0': '24', 'DEFROUTE': 'yes', 'IPV4_FAILURE_FATAL': 'no',
'NM_CONTROLLED': 'no', 'GATEWAY0': '192.168.1.1', 'DNS2': '8.8.8.8',
'HWADDR': '00:25:64:f9:76:7c', 'ONBOOT': 'yes', 'IPV6INIT': 'no'},
'mtu': '1500', 'netmask': '', 'hwaddr': '00:25:64:f9:76:7c', 'speed':
1000}, 'em3': {'addr': '', 'cfg': {'PEERROUTES': 'yes', 'UUID':
'2984885c-fbd8-4ad1-a393-00f0a205ae79', 'NAME': 'em3', 'TYPE':
'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE': 'yes', 'PEERDNS':
'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR': '00:25:64:F9:76:80',
'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes', 'IPV6_FAILURE_FATAL':
'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE': 'yes', 'ONBOOT':
'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask': '', 'hwaddr':
'00:25:64:f9:76:80', 'speed': 0}, 'em2': {'addr': '', 'cfg':
{'PEERROUTES': 'yes', 'UUID': 'ebd889bc-57ae-4ee9-8db2-4595309ee81c',
'NAME': 'em2', 'TYPE': 'Ethernet', 'IPV6_PEERDNS': 'yes', 'DEFROUTE':
'yes', 'PEERDNS': 'yes', 'IPV4_FAILURE_FATAL': 'no', 'HWADDR':
'00:25:64:F9:76:7E', 'BOOTPROTO': 'dhcp', 'IPV6_AUTOCONF': 'yes',
'IPV6_FAILURE_FATAL': 'no', 'IPV6_PEERROUTES': 'yes', 'IPV6_DEFROUTE':
'yes', 'ONBOOT': 'yes', 'IPV6INIT': 'yes'}, 'mtu': '1500', 'netmask':
'', 'hwaddr': '00:25:64:f9:76:7e', 'speed': 0}}
operatingSystem = {'release': '1', 'version': '18', 'name': 'Fedora'}
packages2 = {'kernel': {'release': '204.fc18.x86_64', 'buildtime':
1358955869.0, 'version': '3.7.4'}, 'spice-server': {'release':
'1.fc18', 'buildtime': 1356035501, 'version': '0.12.2'}, 'vdsm':
{'release': '6.fc18', 'buildtime': 1359564723, 'version': '4.10.3'},
'qemu-kvm': {'release': '2.fc18', 'buildtime': 1358351894, 'version':
'1.2.2'}, 'libvirt': {'release': '3.fc18', 'buildtime': 1355788803,
'version': '0.10.2.2'}, 'qemu-img': {'release': '2.fc18', 'buildtime':
1358351894, 'version': '1.2.2'}, 'mom': {'release': '1.fc18',
'buildtime': 1349470214, 'version': '0.3.0'}}
reservedMem = 321
software_revision = 6
software_version = 4.10
supportedENGINEs = ['3.0', '3.1']
supportedProtocols = ['2.2', '2.3']
uuid = 4C4C4544-0056-5910-8047-CAC04F4E344A
version_name = Snow Man
vlans = {}
vmTypes = ['kvm']
(this time the host is the intended one... ;-)
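For comparison, the physical topology as the kernel and libvirt report it can
be pulled directly on the host (a sketch; my guess, and it is only a guess, is
that each 12-core Magny-Cours package is being counted as two 6-core NUMA
nodes somewhere in the chain):

# Packages, cores and NUMA nodes as the kernel sees them.
lscpu | egrep 'Socket|Core|Thread|NUMA'
grep 'physical id' /proc/cpuinfo | sort -u | wc -l   # distinct physical packages

# libvirt's view; as far as I recall, nodeinfo counts sockets per NUMA cell.
virsh nodeinfo
virsh capabilities | grep '<topology'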
Gianluca
[Users] Problems with upgrade to 3.3 nightly
by Gianluca Cecchi
Hello,
passing from an all-in-one setup based on F18 and the nightly
ovirt-engine-setup-plugin-allinone-3.2.0-1.20130125.git032a91f.fc18.noarch
to the proposed new nightly, I get these errors:
[root@tekkaman ~]# engine-upgrade
Checking for updates... (This may take several minutes)...[ DONE ]
9 Updates available:
* ovirt-engine-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-backend-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-config-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-dbscripts-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-notification-service-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-restapi-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-tools-common-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-userportal-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
* ovirt-engine-webadmin-portal-3.3.0-0.1.20130201070537.20130201.git357cdaa.fc18.noarch
During the upgrade process, oVirt Engine will not be accessible.
All existing running virtual machines will continue but you will not be able to
start or stop any new virtual machines during the process.
Would you like to proceed? (yes|no): yes
Stopping ovirt-engine service... [ DONE ]
Stopping DB related services... [ DONE ]
Starting DB related services... [ DONE ]
Starting ovirt-engine service... [ DONE ]
[Errno 8] Exec format error
Error: Upgrade failed.
please check log at
/var/log/ovirt-engine/ovirt-engine-upgrade_2013_02_02_13_05_15.log
Is this expected (and do I have to start from a clean environment), or
should the upgrade go OK?
engine-upgrade log here:
https://docs.google.com/file/d/0BwoPbcrMv8mvU2ZYX3lOV2V3Tkk/edit?usp=sharing
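For what it's worth, [Errno 8] is ENOEXEC, which usually means something was
exec'd that has no usable shebang line or is built for the wrong architecture;
a rough way to find the offender from the upgrade log (the grep context size
and the placeholder script path are guesses):

# Find which external command engine-upgrade was running when it failed.
grep -n -i -B 5 'Exec format error' \
    /var/log/ovirt-engine/ovirt-engine-upgrade_2013_02_02_13_05_15.log

# Once the script path is known, check its interpreter line and file type.
head -n 1 /path/to/suspect-script   # placeholder path
file /path/to/suspect-script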
Gianluca