[Users] Mailinglist naming
by Vincent Van der Kussen
Hi all,
I was just wondering if it would be possible to rename the mailing list
from Users to ovirt-users or something similar. I think it would make
things a bit clearer when looking at your inbox.
This is just a suggestion. Don't take it as criticism!
--
Regards,
Vincent
[Users] internet explorer 9
by Cristian Falcas
Hi,
Does anybody know why I can't see anything in IE9? This is from a
Windows 7 machine.
The menus appear, the pages change when I click them, but nothing is
displayed (see attached screenshot).
Best regards,
[Users] Backup and restore VM
by Artem
Hi all,
How do I back up and restore VMs when the manager crashes (DB crash,
manager server crash, or similar)?
Any ideas?
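One commonly suggested starting point (a hedged sketch, not an official
procedure) is to take regular dumps of the engine's PostgreSQL database
together with its configuration and PKI directories, so the manager can
be rebuilt on a fresh machine. The database name "engine" and the paths
below are the usual defaults; verify them on your installation first:

# run on the engine machine, e.g. as the postgres user
pg_dump -f /backup/engine-db.sql engine
tar czf /backup/engine-etc.tar.gz /etc/ovirt-engine /etc/pki/ovirt-engine

Restoring is the reverse: reinstall the engine, stop the service, load
the dump back with psql, and put the configuration and PKI files back in
place.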
[Users] can't add hosts due to version compatibility with latest nightly
by Brian Vetter
I decided to start over and reinstall with the latest nightly build. When
trying to get the system set up, I get the following error when trying to
add a host:

Host mech is compatible with versions (3.0,3.1) and cannot join Cluster
DCCluster which is set to version 3.2

I saw no way to create a cluster for any other version (it only provides
a 3.2 choice in the drop-down).

I noticed that the vdsm rpms in the nightly repository were at version
4.10.1-0.79, as opposed to the 4.10.0 version I saw on the system after
adding the host. On a lark, I logged into that system and tried
installing/upgrading the vdsm version manually using the nightly build.
When I tried this, I got an error saying that it required libvirt >=
0.10.1-1. To get around that, I had to download all of the libvirt rpms
for 0.10.1-1 (not in yum), install them, and then upgrade vdsm.

I did seem to run into one issue: the system did not reboot on its own
(oVirt had it in "reboot" mode). I had to log into the system and reboot
it manually to get it to move to the next state and activate.

So I have a few questions. Is ovirt-engine supposed to push a matching
vdsm version (one that supports 3.2) to the host when it is added? If so,
it doesn't appear to do that; instead it pushes an older one that is only
3.1 "compatible".

And if it should have pushed a newer version (possibly matching the
nightly vdsm build), it seems like there is a push to use a newer version
of libvirt. I would presume that is coming in FC18. So are the current
nightly builds expected to run only on an FC18 beta-type release (which
is supposedly coming soon)?
Brian
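For anyone hitting the same dependency wall, the manual workaround Brian
describes comes down to roughly the following sketch; the exact libvirt
subpackage set is an assumption, and the 0.10.1-1 rpms had to be fetched
by hand since they were not in yum at the time:

# on the host, as root; rpm filenames are illustrative
rpm -Uvh libvirt-0.10.1-1.*.rpm libvirt-client-0.10.1-1.*.rpm \
    libvirt-python-0.10.1-1.*.rpm
yum update vdsm   # now pulls 4.10.1-0.79 from the nightly repo
reboot            # oVirt leaves the host in "reboot" mode; finish manually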
[Users] Remove Default Cluster
by Shaun Glass
Morning,
For some reason after installation (not sure if it was due to a DNS
failure), I have two clusters listed in oVirt. The Default cluster has no
hosts, hence no guests, and I would like to remove it. However, since it
is the default, oVirt will not allow me to remove it.
Is anybody able to assist with this?
oVirt 3.1 on Fedora 16 64bit.
Regards
[Users] Import/Migrate of data domain from one server to another.
by Michael Ayers
Hey all,
I was running the oVirt manager as a VM on another server. That VM got
nuked, and I had to set up the manager on a new VM. I have an old
data-storage domain, configured by the original manager, that I need to
import into the new manager, or convert to an export domain and import
that. Is there a procedure available to do either?
Thanks,
Mike
[Users] ovirt not seeing a VM has started.
by Daniel Rowe
Hi
I have my oVirt setup at the point where I can add VMs.
I am using Gluster, and to be able to add VMs I had to run
"setsebool -P virt_use_fusefs=on"; once I did this, adding VMs seems to
work fine.
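A quick sanity check that the boolean really took effect and persists
(getsebool is standard SELinux tooling):

getsebool virt_use_fusefs
# expected: virt_use_fusefs --> on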
I have added a Fedora 17 VM. When I start the VM in oVirt, the
management interface never sees it as started; it just always shows the
hourglass next to it.
If I run ps on the node, I can see the VM is indeed started, and I can
see the qemu-kvm process for that VM.
There are no deny messages in audit.log on the node.
The message log on the node has this message: "VM start: vdsm vm.Vm
WARNING vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::_readPauseCode
unsupported by libvirt vm".
Not sure if this is related.
The vdsm log when I click on the run VM:
Thread-1539::DEBUG::2012-10-30
09:56:09,203::BindingXMLRPC::859::vds::(wrapper) client
[192.168.1.10]::call vmCreate with ({'custom': {'viodiskcache':
'writeback'}, 'keyboardLayout': 'en-us', 'kvmEnable': 'true',
'pitReinjection': 'false', 'acpiEnable': 'true', 'emulatedMachine':
'pc-0.14', 'vmId': '4abdba0d-aa1f-48f1-8160-3cde63165057', 'devices':
[{'device': 'qxl', 'specParams': {'vram': '65536'}, 'type': 'video',
'deviceId': '25ada4cb-1108-491b-8d2c-c05cb353cc99'}, {'index': '2',
'iface': 'ide', 'bootOrder': '1', 'specParams': {'path':
'Fedora-17-x86_64-DVD.iso'}, 'readonly': 'true', 'deviceId':
'1b62fc2d-88e5-42e0-9666-3de861a71c44', 'device': 'cdrom', 'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-DVD.iso',
'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw',
'type': 'disk', 'volumeID': '12413fe4-a694-439b-95d8-40d5a576dada',
'imageID': '025d5960-23db-4e07-98c8-4c9682155eb1', 'specParams': {},
'readonly': 'false', 'domainID':
'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'deviceId':
'025d5960-23db-4e07-98c8-4c9682155eb1', 'poolID':
'640fa36c-1da8-11e2-86f7-1078d2e9dece', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'optional': 'false'}, {'nicModel':
'pv', 'macAddr': '00:1a:4a:70:83:00', 'network': 'ovirtmgmt',
'specParams': {}, 'deviceId': '80841d17-f758-46c8-84bc-0346072c4156',
'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon',
'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId':
'09f59ef5-b17f-49ea-99e6-8ee7bcca64a1'}], 'smp': '2', 'vmType': 'kvm',
'timeOffset': '0', 'memSize': 1024, 'spiceSslCipherSuite': 'DEFAULT',
'cpuType': 'Conroe', 'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay',
'smpCoresPerSocket': '2', 'vmName': 'bsdlinux04vm', 'display': 'qxl',
'transparentHugePages': 'true', 'nice': '0'},) {} flowID [8a5456c]
Thread-1539::INFO::2012-10-30
09:56:09,203::API::601::vds::(_getNetworkIp) network None: using 0
Thread-1539::INFO::2012-10-30 09:56:09,204::API::228::vds::(create)
vmContainerLock acquired by vm 4abdba0d-aa1f-48f1-8160-3cde63165057
Thread-1540::DEBUG::2012-10-30
09:56:09,207::vm::564::vm.Vm::(_startUnderlyingVm)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::Start
Thread-1539::DEBUG::2012-10-30 09:56:09,207::API::244::vds::(create)
Total desktops after creation of 4abdba0d-aa1f-48f1-8160-3cde63165057
is 1
Thread-1539::DEBUG::2012-10-30
09:56:09,208::BindingXMLRPC::865::vds::(wrapper) return vmCreate with
{'status': {'message': 'Done', 'code': 0}, 'vmList': {'status':
'WaitForLaunch', 'acpiEnable': 'true', 'emulatedMachine': 'pc-0.14',
'vmId': '4abdba0d-aa1f-48f1-8160-3cde63165057', 'pid': '0',
'timeOffset': '0', 'displayPort': '-1', 'displaySecurePort': '-1',
'spiceSslCipherSuite': 'DEFAULT', 'cpuType': 'Conroe', 'custom':
{'viodiskcache': 'writeback'}, 'clientIp': '', 'nicModel':
'rtl8139,pv', 'keyboardLayout': 'en-us', 'kvmEnable': 'true',
'pitReinjection': 'false', 'transparentHugePages': 'true', 'devices':
[{'device': 'qxl', 'specParams': {'vram': '65536'}, 'type': 'video',
'deviceId': '25ada4cb-1108-491b-8d2c-c05cb353cc99'}, {'index': '2',
'iface': 'ide', 'bootOrder': '1', 'specParams': {'path':
'Fedora-17-x86_64-DVD.iso'}, 'readonly': 'true', 'deviceId':
'1b62fc2d-88e5-42e0-9666-3de861a71c44', 'device': 'cdrom', 'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-DVD.iso',
'type': 'disk'}, {'index': 0, 'iface': 'virtio', 'format': 'raw',
'type': 'disk', 'volumeID': '12413fe4-a694-439b-95d8-40d5a576dada',
'imageID': '025d5960-23db-4e07-98c8-4c9682155eb1', 'specParams': {},
'readonly': 'false', 'domainID':
'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'deviceId':
'025d5960-23db-4e07-98c8-4c9682155eb1', 'poolID':
'640fa36c-1da8-11e2-86f7-1078d2e9dece', 'device': 'disk', 'shared':
'false', 'propagateErrors': 'off', 'optional': 'false'}, {'nicModel':
'pv', 'macAddr': '00:1a:4a:70:83:00', 'network': 'ovirtmgmt',
'specParams': {}, 'deviceId': '80841d17-f758-46c8-84bc-0346072c4156',
'device': 'bridge', 'type': 'interface'}, {'device': 'memballoon',
'specParams': {'model': 'virtio'}, 'type': 'balloon', 'deviceId':
'09f59ef5-b17f-49ea-99e6-8ee7bcca64a1'}], 'smp': '2', 'vmType': 'kvm',
'memSize': 1024, 'displayIp': '0', 'spiceSecureChannels':
'smain,sinputs,scursor,splayback,srecord,sdisplay',
'smpCoresPerSocket': '2', 'vmName': 'bsdlinux04vm', 'display': 'qxl',
'nice': '0'}}
Thread-1540::DEBUG::2012-10-30
09:56:09,208::vm::568::vm.Vm::(_startUnderlyingVm)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::_ongoingCreations
acquired
Thread-1540::INFO::2012-10-30
09:56:09,210::libvirtvm::1285::vm.Vm::(_run)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::VM wrapper has started
Thread-1540::DEBUG::2012-10-30
09:56:09,211::task::588::TaskManager.Task::(_updateState)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::moving from state init ->
state preparing
Thread-1540::INFO::2012-10-30
09:56:09,211::logUtils::37::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='e552b5fd-30fb-4e18-8bc4-26517f11a283',
spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
imgUUID='025d5960-23db-4e07-98c8-4c9682155eb1',
volUUID='12413fe4-a694-439b-95d8-40d5a576dada', options=None)
Thread-1540::DEBUG::2012-10-30
09:56:09,212::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`a4d63aa4-f9af-48c3-b57f-68149d3ea523`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1540::DEBUG::2012-10-30
09:56:09,212::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' for lock type 'shared'
Thread-1540::DEBUG::2012-10-30
09:56:09,212::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free. Now
locking as 'shared' (1 active user)
Thread-1540::DEBUG::2012-10-30
09:56:09,213::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`a4d63aa4-f9af-48c3-b57f-68149d3ea523`::Granted
request
Thread-1540::DEBUG::2012-10-30
09:56:09,213::task::817::TaskManager.Task::(resourceAcquired)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::_resourcesAcquired:
Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283 (shared)
Thread-1540::DEBUG::2012-10-30
09:56:09,213::task::978::TaskManager.Task::(_decref)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::ref 1 aborting False
Thread-1540::DEBUG::2012-10-30
09:56:09,216::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 12413fe4-a694-439b-95d8-40d5a576dada
Thread-1540::DEBUG::2012-10-30
09:56:09,220::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 12413fe4-a694-439b-95d8-40d5a576dada
Thread-1540::INFO::2012-10-30
09:56:09,221::logUtils::39::dispatcher::(wrapper) Run and protect:
getVolumeSize, Return response: {'truesize': '0', 'apparentsize':
'8589934592'}
Thread-1540::DEBUG::2012-10-30
09:56:09,221::task::1172::TaskManager.Task::(prepare)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::finished: {'truesize':
'0', 'apparentsize': '8589934592'}
Thread-1540::DEBUG::2012-10-30
09:56:09,222::task::588::TaskManager.Task::(_updateState)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::moving from state
preparing -> state finished
Thread-1540::DEBUG::2012-10-30
09:56:09,222::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283': < ResourceRef
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', isValid: 'True' obj:
'None'>}
Thread-1540::DEBUG::2012-10-30
09:56:09,222::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1540::DEBUG::2012-10-30
09:56:09,222::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283'
Thread-1540::DEBUG::2012-10-30
09:56:09,223::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' (0
active users)
Thread-1540::DEBUG::2012-10-30
09:56:09,223::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free,
finding out if anyone is waiting for it.
Thread-1540::DEBUG::2012-10-30
09:56:09,223::resourceManager::565::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', Clearing records.
Thread-1540::DEBUG::2012-10-30
09:56:09,223::task::978::TaskManager.Task::(_decref)
Task=`f91ddc64-360b-4f33-bb53-9c6436a6a6d4`::ref 0 aborting False
Thread-1540::INFO::2012-10-30
09:56:09,224::clientIF::274::vds::(prepareVolumePath) prepared volume
path: /rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-DVD.iso
Thread-1540::DEBUG::2012-10-30
09:56:09,224::task::588::TaskManager.Task::(_updateState)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::moving from state init ->
state preparing
Thread-1540::INFO::2012-10-30
09:56:09,224::logUtils::37::dispatcher::(wrapper) Run and protect:
prepareImage(sdUUID='e552b5fd-30fb-4e18-8bc4-26517f11a283',
spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
imgUUID='025d5960-23db-4e07-98c8-4c9682155eb1',
volUUID='12413fe4-a694-439b-95d8-40d5a576dada')
Thread-1540::DEBUG::2012-10-30
09:56:09,225::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`1fd9533e-6b8a-458d-a4d2-001f0cc93ac2`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1540::DEBUG::2012-10-30
09:56:09,225::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' for lock type 'shared'
Thread-1540::DEBUG::2012-10-30
09:56:09,225::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free. Now
locking as 'shared' (1 active user)
Thread-1540::DEBUG::2012-10-30
09:56:09,225::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`1fd9533e-6b8a-458d-a4d2-001f0cc93ac2`::Granted
request
Thread-1540::DEBUG::2012-10-30
09:56:09,226::task::817::TaskManager.Task::(resourceAcquired)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::_resourcesAcquired:
Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283 (shared)
Thread-1540::DEBUG::2012-10-30
09:56:09,226::task::978::TaskManager.Task::(_decref)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::ref 1 aborting False
Thread-1540::DEBUG::2012-10-30
09:56:09,227::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 12413fe4-a694-439b-95d8-40d5a576dada
Thread-1540::INFO::2012-10-30
09:56:09,234::image::357::Storage.Image::(getChain)
sdUUID=e552b5fd-30fb-4e18-8bc4-26517f11a283
imgUUID=025d5960-23db-4e07-98c8-4c9682155eb1
chain=[<storage.fileVolume.FileVolume instance at 0x7f82442cb0e0>]
Thread-1540::INFO::2012-10-30
09:56:09,237::logUtils::39::dispatcher::(wrapper) Run and protect:
prepareImage, Return response: {'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada',
'chain': [{'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada',
'domainID': 'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'volumeID':
'12413fe4-a694-439b-95d8-40d5a576dada', 'imageID':
'025d5960-23db-4e07-98c8-4c9682155eb1'}]}
Thread-1540::DEBUG::2012-10-30
09:56:09,237::task::1172::TaskManager.Task::(prepare)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::finished: {'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada',
'chain': [{'path':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada',
'domainID': 'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'volumeID':
'12413fe4-a694-439b-95d8-40d5a576dada', 'imageID':
'025d5960-23db-4e07-98c8-4c9682155eb1'}]}
Thread-1540::DEBUG::2012-10-30
09:56:09,237::task::588::TaskManager.Task::(_updateState)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::moving from state
preparing -> state finished
Thread-1540::DEBUG::2012-10-30
09:56:09,237::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283': < ResourceRef
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', isValid: 'True' obj:
'None'>}
Thread-1540::DEBUG::2012-10-30
09:56:09,237::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1540::DEBUG::2012-10-30
09:56:09,238::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283'
Thread-1540::DEBUG::2012-10-30
09:56:09,238::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' (0
active users)
Thread-1540::DEBUG::2012-10-30
09:56:09,238::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free,
finding out if anyone is waiting for it.
Thread-1540::DEBUG::2012-10-30
09:56:09,238::resourceManager::565::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', Clearing records.
Thread-1540::DEBUG::2012-10-30
09:56:09,239::task::978::TaskManager.Task::(_decref)
Task=`c1701ef5-96d8-4036-a5cb-7a285152d84f`::ref 0 aborting False
Thread-1540::INFO::2012-10-30
09:56:09,239::clientIF::274::vds::(prepareVolumePath) prepared volume
path: /rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada
Thread-1540::DEBUG::2012-10-30
09:56:09,249::__init__::1249::Storage.Misc.excCmd::(_log)
'/usr/libexec/vdsm/hooks/before_vm_start/50_hugepages' (cwd None)
Thread-1540::DEBUG::2012-10-30
09:56:09,375::__init__::1249::Storage.Misc.excCmd::(_log) SUCCESS:
<err> = ''; <rc> = 0
Thread-1540::INFO::2012-10-30 09:56:09,376::hooks::72::root::(_runHooksDir)
Thread-1540::DEBUG::2012-10-30
09:56:09,376::libvirtvm::1338::vm.Vm::(_run)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::<?xml version="1.0"
encoding="utf-8"?>
<domain type="kvm">
<name>bsdlinux04vm</name>
<uuid>4abdba0d-aa1f-48f1-8160-3cde63165057</uuid>
<memory>1048576</memory>
<currentMemory>1048576</currentMemory>
<vcpu>2</vcpu>
<devices>
<channel type="unix">
<target name="com.redhat.rhevm.vdsm" type="virtio"/>
<source mode="bind"
path="/var/lib/libvirt/qemu/channels/bsdlinux04vm.com.redhat.rhevm.vdsm"/>
</channel>
<input bus="ps2" type="mouse"/>
<channel type="spicevmc">
<target name="com.redhat.spice.0" type="virtio"/>
</channel>
<graphics autoport="yes" keymap="en-us" listen="0" passwd="*****"
passwdValidTo="1970-01-01T00:00:01" port="-1" tlsPort="-1"
type="spice">
<channel mode="secure" name="main"/>
<channel mode="secure" name="inputs"/>
<channel mode="secure" name="cursor"/>
<channel mode="secure" name="playback"/>
<channel mode="secure" name="record"/>
<channel mode="secure" name="display"/>
</graphics>
<console type="pty">
<target port="0" type="virtio"/>
</console>
<video>
<model heads="1" type="qxl" vram="65536"/>
</video>
<interface type="bridge">
<mac address="00:1a:4a:70:83:00"/>
<model type="virtio"/>
<source bridge="ovirtmgmt"/>
</interface>
<memballoon model="virtio"/>
<disk device="cdrom" snapshot="no" type="file">
<source file="/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111/Fedora-17-x86_64-DVD.iso"
startupPolicy="optional"/>
<target bus="ide" dev="hdc"/>
<readonly/>
<serial></serial>
<boot order="1"/>
</disk>
<disk device="disk" snapshot="no" type="file">
<source file="/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/e552b5fd-30fb-4e18-8bc4-26517f11a283/images/025d5960-23db-4e07-98c8-4c9682155eb1/12413fe4-a694-439b-95d8-40d5a576dada"/>
<target bus="virtio" dev="vda"/>
<serial>025d5960-23db-4e07-98c8-4c9682155eb1</serial>
<driver cache="writeback" error_policy="stop" io="threads"
name="qemu" type="raw"/>
</disk>
</devices>
<os>
<type arch="x86_64" machine="pc-0.14">hvm</type>
<smbios mode="sysinfo"/>
</os>
<sysinfo type="smbios">
<system>
<entry name="manufacturer">Red Hat</entry>
<entry name="product">RHEV Hypervisor</entry>
<entry name="version">17-1</entry>
<entry name="serial">4C4C4544-0059-4810-8034-B7C04F463253_d4:ae:52:a3:1c:1e</entry>
<entry name="uuid">4abdba0d-aa1f-48f1-8160-3cde63165057</entry>
</system>
</sysinfo>
<clock adjustment="0" offset="variable">
<timer name="rtc" tickpolicy="catchup"/>
</clock>
<features>
<acpi/>
</features>
<cpu match="exact">
<model>Conroe</model>
<topology cores="2" sockets="1" threads="1"/>
</cpu>
</domain>
libvirtEventLoop::DEBUG::2012-10-30
09:56:10,075::libvirtvm::2409::vm.Vm::(_onLibvirtLifecycleEvent)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::event Started detail 0
opaque None
Thread-1540::DEBUG::2012-10-30
09:56:10,188::utils::329::vm.Vm::(start)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::Start statistics
collection
Thread-1542::DEBUG::2012-10-30 09:56:10,188::utils::358::vm.Vm::(run)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::Stats thread started
Thread-1542::DEBUG::2012-10-30
09:56:10,189::task::588::TaskManager.Task::(_updateState)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::moving from state init ->
state preparing
Thread-1542::INFO::2012-10-30
09:56:10,190::logUtils::37::dispatcher::(wrapper) Run and protect:
getVolumeSize(sdUUID='e552b5fd-30fb-4e18-8bc4-26517f11a283',
spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
imgUUID='025d5960-23db-4e07-98c8-4c9682155eb1',
volUUID='12413fe4-a694-439b-95d8-40d5a576dada', options=None)
Thread-1542::DEBUG::2012-10-30
09:56:10,191::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`057d2200-a704-4de4-a45b-2771d72f30bb`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1542::DEBUG::2012-10-30
09:56:10,191::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' for lock type 'shared'
Thread-1540::DEBUG::2012-10-30
09:56:10,192::vmChannels::144::vds::(register) Add fileno 18 to
listener's channels.
Thread-1542::DEBUG::2012-10-30
09:56:10,192::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free. Now
locking as 'shared' (1 active user)
Thread-1542::DEBUG::2012-10-30
09:56:10,193::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283`ReqID=`057d2200-a704-4de4-a45b-2771d72f30bb`::Granted
request
Thread-1542::DEBUG::2012-10-30
09:56:10,193::task::817::TaskManager.Task::(resourceAcquired)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::_resourcesAcquired:
Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283 (shared)
Thread-1542::DEBUG::2012-10-30
09:56:10,193::task::978::TaskManager.Task::(_decref)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::ref 1 aborting False
Thread-1542::DEBUG::2012-10-30
09:56:10,195::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 12413fe4-a694-439b-95d8-40d5a576dada
Thread-1540::WARNING::2012-10-30
09:56:10,196::libvirtvm::1547::vm.Vm::(_readPauseCode)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::_readPauseCode
unsupported by libvirt vm
Thread-1542::DEBUG::2012-10-30
09:56:10,199::fileVolume::535::Storage.Volume::(validateVolumePath)
validate path for 12413fe4-a694-439b-95d8-40d5a576dada
Thread-1542::INFO::2012-10-30
09:56:10,200::logUtils::39::dispatcher::(wrapper) Run and protect:
getVolumeSize, Return response: {'truesize': '0', 'apparentsize':
'8589934592'}
Thread-1542::DEBUG::2012-10-30
09:56:10,200::task::1172::TaskManager.Task::(prepare)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::finished: {'truesize':
'0', 'apparentsize': '8589934592'}
Thread-1542::DEBUG::2012-10-30
09:56:10,201::task::588::TaskManager.Task::(_updateState)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::moving from state
preparing -> state finished
Thread-1542::DEBUG::2012-10-30
09:56:10,201::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283': < ResourceRef
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', isValid: 'True' obj:
'None'>}
Thread-1542::DEBUG::2012-10-30
09:56:10,201::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1542::DEBUG::2012-10-30
09:56:10,201::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283'
Thread-1542::DEBUG::2012-10-30
09:56:10,202::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' (0
active users)
Thread-1542::DEBUG::2012-10-30
09:56:10,202::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283' is free,
finding out if anyone is waiting for it.
Thread-1542::DEBUG::2012-10-30
09:56:10,203::resourceManager::565::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.e552b5fd-30fb-4e18-8bc4-26517f11a283', Clearing records.
Thread-1542::DEBUG::2012-10-30
09:56:10,203::task::978::TaskManager.Task::(_decref)
Task=`6a004d5d-63a9-4ffb-a887-8f7d5c6179c7`::ref 0 aborting False
Thread-1540::DEBUG::2012-10-30
09:56:10,206::__init__::1249::Storage.Misc.excCmd::(_log)
'/usr/bin/sudo -n /usr/sbin/service ksmtuned retune' (cwd None)
Thread-1540::DEBUG::2012-10-30
09:56:10,241::__init__::1249::Storage.Misc.excCmd::(_log) FAILED:
<err> = "Redirecting to /bin/systemctl retune
ksmtuned.service\nUnknown operation 'retune'.\n"; <rc> = 1
Thread-1540::DEBUG::2012-10-30
09:56:10,242::vm::580::vm.Vm::(_startUnderlyingVm)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::_ongoingCreations
released
VM Channels Listener::DEBUG::2012-10-30
09:56:10,376::vmChannels::71::vds::(_do_add_channels) fileno 18 was
added to unconnected channels.
VM Channels Listener::DEBUG::2012-10-30
09:56:10,376::vmChannels::103::vds::(_handle_unconnected) Trying to
connect fileno 18.
VM Channels Listener::DEBUG::2012-10-30
09:56:10,376::guestIF::81::vm.Vm::(_connect)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::Attempting connection to
/var/lib/libvirt/qemu/channels/bsdlinux04vm.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2012-10-30
09:56:10,377::guestIF::83::vm.Vm::(_connect)
vmId=`4abdba0d-aa1f-48f1-8160-3cde63165057`::Connected to
/var/lib/libvirt/qemu/channels/bsdlinux04vm.com.redhat.rhevm.vdsm
VM Channels Listener::DEBUG::2012-10-30
09:56:10,377::vmChannels::106::vds::(_handle_unconnected) Connect
fileno 18 was succeed.
Thread-1543::DEBUG::2012-10-30
09:56:14,759::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
Thread-1543::DEBUG::2012-10-30
09:56:14,760::task::588::TaskManager.Task::(_updateState)
Task=`3379e8b3-bf70-49d0-b643-0bb17de93846`::moving from state init ->
state preparing
Thread-1543::INFO::2012-10-30
09:56:14,760::logUtils::37::dispatcher::(wrapper) Run and protect:
getSpmStatus(spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
options=None)
Thread-1543::INFO::2012-10-30
09:56:14,761::logUtils::39::dispatcher::(wrapper) Run and protect:
getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus':
'SPM', 'spmLver': 1}}
Thread-1543::DEBUG::2012-10-30
09:56:14,761::task::1172::TaskManager.Task::(prepare)
Task=`3379e8b3-bf70-49d0-b643-0bb17de93846`::finished: {'spm_st':
{'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
Thread-1543::DEBUG::2012-10-30
09:56:14,761::task::588::TaskManager.Task::(_updateState)
Task=`3379e8b3-bf70-49d0-b643-0bb17de93846`::moving from state
preparing -> state finished
Thread-1543::DEBUG::2012-10-30
09:56:14,761::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-1543::DEBUG::2012-10-30
09:56:14,762::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1543::DEBUG::2012-10-30
09:56:14,762::task::978::TaskManager.Task::(_decref)
Task=`3379e8b3-bf70-49d0-b643-0bb17de93846`::ref 0 aborting False
Thread-1544::DEBUG::2012-10-30
09:56:14,781::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
Thread-1544::DEBUG::2012-10-30
09:56:14,782::task::588::TaskManager.Task::(_updateState)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::moving from state init ->
state preparing
Thread-1544::INFO::2012-10-30
09:56:14,782::logUtils::37::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo(spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
options=None)
Thread-1544::DEBUG::2012-10-30
09:56:14,782::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece`ReqID=`7e21374b-d582-451f-b37a-ee1ea2f206f1`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1544::DEBUG::2012-10-30
09:56:14,783::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' for lock type 'shared'
Thread-1544::DEBUG::2012-10-30
09:56:14,783::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' is free. Now
locking as 'shared' (1 active user)
Thread-1544::DEBUG::2012-10-30
09:56:14,783::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece`ReqID=`7e21374b-d582-451f-b37a-ee1ea2f206f1`::Granted
request
Thread-1544::DEBUG::2012-10-30
09:56:14,784::task::817::TaskManager.Task::(resourceAcquired)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::_resourcesAcquired:
Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece (shared)
Thread-1544::DEBUG::2012-10-30
09:56:14,784::task::978::TaskManager.Task::(_decref)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::ref 1 aborting False
Thread-1544::INFO::2012-10-30
09:56:14,793::logUtils::39::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo, Return response: {'info': {'spm_id': 1,
'master_uuid': 'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'name':
'Default', 'version': '0', 'domains':
'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761:Active,e552b5fd-30fb-4e18-8bc4-26517f11a283:Active',
'pool_status': 'connected', 'isoprefix':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111',
'type': 'SHAREDFS', 'master_ver': 1, 'lver': 1}, 'dominfo':
{'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761': {'status': 'Active',
'diskfree': '382827757568', 'alerts': [], 'disktotal':
'424540110848'}, 'e552b5fd-30fb-4e18-8bc4-26517f11a283': {'status':
'Active', 'diskfree': '2099627163648', 'alerts': [], 'disktotal':
'2212208836608'}}}
Thread-1544::DEBUG::2012-10-30
09:56:14,793::task::1172::TaskManager.Task::(prepare)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::finished: {'info':
{'spm_id': 1, 'master_uuid': 'e552b5fd-30fb-4e18-8bc4-26517f11a283',
'name': 'Default', 'version': '0', 'domains':
'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761:Active,e552b5fd-30fb-4e18-8bc4-26517f11a283:Active',
'pool_status': 'connected', 'isoprefix':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111',
'type': 'SHAREDFS', 'master_ver': 1, 'lver': 1}, 'dominfo':
{'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761': {'status': 'Active',
'diskfree': '382827757568', 'alerts': [], 'disktotal':
'424540110848'}, 'e552b5fd-30fb-4e18-8bc4-26517f11a283': {'status':
'Active', 'diskfree': '2099627163648', 'alerts': [], 'disktotal':
'2212208836608'}}}
Thread-1544::DEBUG::2012-10-30
09:56:14,794::task::588::TaskManager.Task::(_updateState)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::moving from state
preparing -> state finished
Thread-1544::DEBUG::2012-10-30
09:56:14,794::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece': < ResourceRef
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece', isValid: 'True' obj:
'None'>}
Thread-1544::DEBUG::2012-10-30
09:56:14,794::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1544::DEBUG::2012-10-30
09:56:14,794::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece'
Thread-1544::DEBUG::2012-10-30
09:56:14,795::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' (0
active users)
Thread-1544::DEBUG::2012-10-30
09:56:14,795::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' is free,
finding out if anyone is waiting for it.
Thread-1544::DEBUG::2012-10-30
09:56:14,795::resourceManager::565::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece', Clearing records.
Thread-1544::DEBUG::2012-10-30
09:56:14,795::task::978::TaskManager.Task::(_decref)
Task=`f7a0fd20-fddb-48f7-832e-ed230b49d4f0`::ref 0 aborting False
Thread-1545::DEBUG::2012-10-30
09:56:24,820::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
Thread-1545::DEBUG::2012-10-30
09:56:24,820::task::588::TaskManager.Task::(_updateState)
Task=`911438a8-fbf0-456f-b979-9e361bda85f2`::moving from state init ->
state preparing
Thread-1545::INFO::2012-10-30
09:56:24,821::logUtils::37::dispatcher::(wrapper) Run and protect:
getSpmStatus(spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
options=None)
Thread-1545::INFO::2012-10-30
09:56:24,821::logUtils::39::dispatcher::(wrapper) Run and protect:
getSpmStatus, Return response: {'spm_st': {'spmId': 1, 'spmStatus':
'SPM', 'spmLver': 1}}
Thread-1545::DEBUG::2012-10-30
09:56:24,821::task::1172::TaskManager.Task::(prepare)
Task=`911438a8-fbf0-456f-b979-9e361bda85f2`::finished: {'spm_st':
{'spmId': 1, 'spmStatus': 'SPM', 'spmLver': 1}}
Thread-1545::DEBUG::2012-10-30
09:56:24,822::task::588::TaskManager.Task::(_updateState)
Task=`911438a8-fbf0-456f-b979-9e361bda85f2`::moving from state
preparing -> state finished
Thread-1545::DEBUG::2012-10-30
09:56:24,822::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-1545::DEBUG::2012-10-30
09:56:24,822::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1545::DEBUG::2012-10-30
09:56:24,822::task::978::TaskManager.Task::(_decref)
Task=`911438a8-fbf0-456f-b979-9e361bda85f2`::ref 0 aborting False
Thread-1546::DEBUG::2012-10-30
09:56:24,840::BindingXMLRPC::156::vds::(wrapper) [192.168.1.10]
Thread-1546::DEBUG::2012-10-30
09:56:24,841::task::588::TaskManager.Task::(_updateState)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::moving from state init ->
state preparing
Thread-1546::INFO::2012-10-30
09:56:24,841::logUtils::37::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo(spUUID='640fa36c-1da8-11e2-86f7-1078d2e9dece',
options=None)
Thread-1546::DEBUG::2012-10-30
09:56:24,841::resourceManager::175::ResourceManager.Request::(__init__)
ResName=`Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece`ReqID=`c8c2813e-c44c-44f2-b1dd-0f797ecadce3`::Request
was made in '/usr/share/vdsm/storage/resourceManager.py' line '485' at
'registerResource'
Thread-1546::DEBUG::2012-10-30
09:56:24,842::resourceManager::486::ResourceManager::(registerResource)
Trying to register resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' for lock type 'shared'
Thread-1546::DEBUG::2012-10-30
09:56:24,842::resourceManager::528::ResourceManager::(registerResource)
Resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' is free. Now
locking as 'shared' (1 active user)
Thread-1546::DEBUG::2012-10-30
09:56:24,842::resourceManager::212::ResourceManager.Request::(grant)
ResName=`Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece`ReqID=`c8c2813e-c44c-44f2-b1dd-0f797ecadce3`::Granted
request
Thread-1546::DEBUG::2012-10-30
09:56:24,843::task::817::TaskManager.Task::(resourceAcquired)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::_resourcesAcquired:
Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece (shared)
Thread-1546::DEBUG::2012-10-30
09:56:24,843::task::978::TaskManager.Task::(_decref)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::ref 1 aborting False
Thread-1546::INFO::2012-10-30
09:56:24,852::logUtils::39::dispatcher::(wrapper) Run and protect:
getStoragePoolInfo, Return response: {'info': {'spm_id': 1,
'master_uuid': 'e552b5fd-30fb-4e18-8bc4-26517f11a283', 'name':
'Default', 'version': '0', 'domains':
'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761:Active,e552b5fd-30fb-4e18-8bc4-26517f11a283:Active',
'pool_status': 'connected', 'isoprefix':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111',
'type': 'SHAREDFS', 'master_ver': 1, 'lver': 1}, 'dominfo':
{'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761': {'status': 'Active',
'diskfree': '382827757568', 'alerts': [], 'disktotal':
'424540110848'}, 'e552b5fd-30fb-4e18-8bc4-26517f11a283': {'status':
'Active', 'diskfree': '2099627163648', 'alerts': [], 'disktotal':
'2212208836608'}}}
Thread-1546::DEBUG::2012-10-30
09:56:24,853::task::1172::TaskManager.Task::(prepare)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::finished: {'info':
{'spm_id': 1, 'master_uuid': 'e552b5fd-30fb-4e18-8bc4-26517f11a283',
'name': 'Default', 'version': '0', 'domains':
'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761:Active,e552b5fd-30fb-4e18-8bc4-26517f11a283:Active',
'pool_status': 'connected', 'isoprefix':
'/rhev/data-center/640fa36c-1da8-11e2-86f7-1078d2e9dece/65f95655-5bbe-47e4-8d3a-6e2ffc5c7761/images/11111111-1111-1111-1111-111111111111',
'type': 'SHAREDFS', 'master_ver': 1, 'lver': 1}, 'dominfo':
{'65f95655-5bbe-47e4-8d3a-6e2ffc5c7761': {'status': 'Active',
'diskfree': '382827757568', 'alerts': [], 'disktotal':
'424540110848'}, 'e552b5fd-30fb-4e18-8bc4-26517f11a283': {'status':
'Active', 'diskfree': '2099627163648', 'alerts': [], 'disktotal':
'2212208836608'}}}
Thread-1546::DEBUG::2012-10-30
09:56:24,853::task::588::TaskManager.Task::(_updateState)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::moving from state
preparing -> state finished
Thread-1546::DEBUG::2012-10-30
09:56:24,853::resourceManager::809::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources
{'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece': < ResourceRef
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece', isValid: 'True' obj:
'None'>}
Thread-1546::DEBUG::2012-10-30
09:56:24,853::resourceManager::844::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-1546::DEBUG::2012-10-30
09:56:24,854::resourceManager::538::ResourceManager::(releaseResource)
Trying to release resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece'
Thread-1546::DEBUG::2012-10-30
09:56:24,854::resourceManager::553::ResourceManager::(releaseResource)
Released resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' (0
active users)
Thread-1546::DEBUG::2012-10-30
09:56:24,854::resourceManager::558::ResourceManager::(releaseResource)
Resource 'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece' is free,
finding out if anyone is waiting for it.
Thread-1546::DEBUG::2012-10-30
09:56:24,854::resourceManager::565::ResourceManager::(releaseResource)
No one is waiting for resource
'Storage.640fa36c-1da8-11e2-86f7-1078d2e9dece', Clearing records.
Thread-1546::DEBUG::2012-10-30
09:56:24,855::task::978::TaskManager.Task::(_decref)
Task=`fbf6f12b-58f8-45bf-b34f-5fa3ae93123c`::ref 0 aborting False
Regards
Daniel Rowe
[Users] ovirt-iso-uploader cannot upload iso image
by sirin
Hi all,
~]# engine-iso-uploader list
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL
connection.
INFO: Use the -h option to see usage

~]# engine-iso-uploader -i iso upload CentOS-6.3-x86_64-minimal.iso
Please provide the REST API password for the admin@internal oVirt Engine
user (CTRL+D to abort):
ERROR: [ERROR]::ca_file (CA certificate) must be specified for SSL
connection.
INFO: Use the -h option to see usage.

I found this: http://www.mail-archive.com/users@ovirt.org/msg03325.html,
but it did not resolve my problem.

List of installed rpms:

~]# rpm -qa | grep ovi*
ovirt-engine-cli-3.2.0.5-1.el6.noarch
ovirt-log-collector-3.1.0-16.el6.noarch
ovirt-engine-jbossas711-1-0.x86_64
ovirt-image-uploader-3.1.0-16.el6.noarch
ovirt-engine-setup-3.1.0-3.19.el6.noarch
ovirt-engine-genericapi-3.1.0-3.19.el6.noarch
ovirt-engine-notification-service-3.1.0-3.19.el6.noarch
ovirt-engine-backend-3.1.0-3.19.el6.noarch
ovirt-engine-webadmin-portal-3.1.0-3.19.el6.noarch
ovirt-engine-3.1.0-3.19.el6.noarch
ovirt-engine-sdk-3.2.0.2-1.el6.noarch
ovirt-engine-dbscripts-3.1.0-3.19.el6.noarch
ovirt-engine-userportal-3.1.0-3.19.el6.noarch
ovirt-engine-tools-common-3.1.0-3.19.el6.noarch
ovirt-iso-uploader-3.1.0-16.el6.noarch
ovirt-engine-restapi-3.1.0-3.19.el6.noarch
ovirt-engine-config-3.1.0-3.19.el6.noarch
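Based on the thread linked above, the usual fix is to point the tool at
the engine's CA certificate. A hedged sketch, assuming isouploader.conf
takes an INI-style [ISOUploader] section with a ca_file key (check both
against your installed version):

# append to the iso-uploader configuration, then retry the upload
cat >> /etc/ovirt-engine/isouploader.conf <<'EOF'
[ISOUploader]
ca_file=/etc/pki/ovirt-engine/ca.pem
EOF
engine-iso-uploader -i iso upload CentOS-6.3-x86_64-minimal.iso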
[Users] oVirt at LinuxCon Europe Next Week
by Jason Brooks
LinuxCon Europe 2012 is taking place next week in Barcelona, Spain,
alongside a cluster of related workshops and summits, several of which
will focus on the oVirt Project.
If you are planning on attending LinuxCon Europe next week, it should
provide a great opportunity to connect with other oVirt community
members in attendance. If you know others outside the community who plan
to be at LinuxCon Europe, and who may be interested in learning more
about oVirt, please pass along this information.
On Monday November 5, from 10:30am to 7:00pm, and continuing on Tuesday
from 10:30am to 4:30pm, there will be a Technology Showcase with booths
from LinuxCon Europe exhibitors. The oVirt Project will be in booth #22,
on the ground floor of the Hotel Fira Palace.
On Tuesday November 6, the oVirt Project's own Itamar Heim, from Red
Hat, will be giving the talk "Introduction to the oVirt Virtualization
Management Platform" from 2:15pm to 3:10pm at the Hotel Fira Palace, in
the Verdi room, as part of the CloudOpen Summit.
On Wednesday November 7, the oVirt Workshop will kick off at 9am in the
L'ARIA Restaurant, before covering an introduction to oVirt,
architecture and roadmap presentations, and deep dives into oVirt
networking and storage, leading up to a Hands-On Install & Play Lab from
3:30pm to 6:00pm.
Also on Wednesday, at the KVM Forum, there will be an oVirt Demo from
5:30pm to 6:00pm in the Ambar room.
On Thursday November 8, the oVirt Workshop resumes at 9:00am in the Rubi
room, with a full day of sessions devoted to oVirt integration topics.
Also on Thursday, at 11:15am, Red Hat's Vijay Bellur will be giving a
talk on "Integrating GlusterFS, oVirt and KVM" as part of the Gluster
Workshop, in the Vivaldi room.
On Friday November 9, the oVirt Workshop will begin its final day with a
brief keynote in the Ambar room, before proceeding with a day of
developer-focused topics back in the Rubi room.
For details on the oVirt Workshop agenda, see the schedule at
http://events.linuxfoundation.org/events/kvm-forum/schedule.
For information about LinuxCon Europe 2012, see the event site at
http://events.linuxfoundation.org/events/linuxcon-europe.