[Users] the storage domain cannot activate automatically after a blackout
by hackxay
I met some problems during a blackout.

Some info here:

Thread-2735::ERROR::2013-10-14 14:12:36,169::task::833::TaskManager.Task::(_setError) Task=`35b7c448-73b7-478d-acfc-4321c06defff`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 840, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 602, in getSpmStatus
    pool = self.getPool(spUUID)
  File "/usr/share/vdsm/storage/hsm.py", line 312, in getPool
    raise se.StoragePoolUnknown(spUUID)
StoragePoolUnknown: Unknown pool id, pool not connected: ('3df7d6ba-7766-4eff-a163-5b72d42a2702',)

Thank you!
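For readers hitting the same StoragePoolUnknown after a power loss: a minimal sketch of the mechanism (my reconstruction, not vdsm's actual code) is that getSpmStatus looks the pool up in an in-memory registry that is only populated by connectStoragePool, so after a restart every lookup fails until the pool is reconnected:

```python
# Hedged sketch, not vdsm's real implementation: the pool registry starts
# empty after a host restart, so getSpmStatus fails before any SPM logic runs.
class StoragePoolUnknown(Exception):
    """Unknown pool id, pool not connected."""

class HSM:
    def __init__(self):
        self.pools = {}  # spUUID -> pool object; repopulated by connectStoragePool

    def getPool(self, spUUID):
        if spUUID not in self.pools:
            raise StoragePoolUnknown(spUUID)
        return self.pools[spUUID]

    def getSpmStatus(self, spUUID):
        pool = self.getPool(spUUID)  # raises first, before any SPM status check
        return {"spmStatus": "Free", "pool": pool}
```

If that reading is right, the error itself is expected right after the outage; the real question is why the engine does not retry connecting the pool and reactivating the domain automatically.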
11 years, 2 months
[Users] space left on storage domain
by Nathanaël Blanchet
Hello,
I want to add a new 300 GB disk, and oVirt tells me I have 382 GB left on
the storage domain. When I do this, it complains that there is no more
disk space left on the relevant storage.
Can this parameter be changed via engine-config?
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
[Users] Exports : many success, one failure
by Nicolas Ecarnot
Hi,
I was NFS-exporting many VMs from a 3.1 (iscsi) to a 3.3 (iscsi) oVirt
and it was running fine.
On a Linux VM with 2 disks, I see the export process smoothly filling up
the NFS tree. When it seems to reach the expected data size, the web
GUI tells me the export has failed.
The engine.log shows this, which does not help me much.
2013-10-14 16:23:12,603 ERROR [org.ovirt.engine.core.bll.SPMAsyncTask]
(QuartzScheduler_Worker-82) BaseAsyncTask::LogEndTaskFailure: Task
f931d5cf-1b73-408a-8149-2c4bada047ec (Parent Command ExportVm,
Parameters Type
org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters) ended with
failure:
-- Result: cleanSuccess
-- Message: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = low level Image copy failed,
-- Exception: VDSGenericException: VDSErrorException: Failed to
HSMGetAllTasksStatusesVDS, error = low level Image copy failed
There is still plenty of disk space on the nfs export domain.
Network has not been interrupted or whatever.
There are snapshots but I checked the collapse box when exporting.
What should I look at?
--
Nicolas Ecarnot
[Users] Ovirt 3.3 Fedora 19 add gluster storage permissions error
by Steve Dainard
Hello,
New Ovirt 3.3 install on Fedora 19.
When I try to add a gluster storage domain I get the following:
*UI error:*
*Error while executing action Add Storage Connection: Permission settings
on the specified path do not allow access to the storage.*
*Verify permission settings on the specified storage path.*
*VDSM logs contain:*
Thread-393::DEBUG::2013-09-19
11:59:42,399::BindingXMLRPC::177::vds::(wrapper) client [10.0.0.34]
Thread-393::DEBUG::2013-09-19
11:59:42,399::task::579::TaskManager.Task::(_updateState)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state init ->
state preparing
Thread-393::INFO::2013-09-19
11:59:42,400::logUtils::44::dispatcher::(wrapper) Run and protect:
connectStorageServer(domType=7,
spUUID='00000000-0000-0000-0000-000000000000', conList=[{'port': '',
'connection': '192.168.1.1:/rep2-virt', 'iqn': '', 'portal': '', 'user':
'', 'vfs_type': 'glusterfs', 'password': '******', 'id':
'00000000-0000-0000-0000-000000000000'}], options=None)
Thread-393::DEBUG::2013-09-19
11:59:42,405::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
/usr/bin/mount -t glusterfs 192.168.1.1:/rep2-virt
/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt' (cwd None)
Thread-393::DEBUG::2013-09-19
11:59:42,490::mount::226::Storage.Misc.excCmd::(_runcmd) '/usr/bin/sudo -n
/usr/bin/umount -f -l /rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
(cwd None)
Thread-393::ERROR::2013-09-19
11:59:42,505::hsm::2382::Storage.HSM::(connectStorageServer) Could not
connect to storageServer
Traceback (most recent call last):
File "/usr/share/vdsm/storage/hsm.py", line 2379, in connectStorageServer
conObj.connect()
File "/usr/share/vdsm/storage/storageServer.py", line 227, in connect
raise e
StorageServerAccessPermissionError: Permission settings on the specified
path do not allow access to the storage. Verify permission settings on the
specified storage path.: 'path =
/rhev/data-center/mnt/glusterSD/192.168.1.1:_rep2-virt'
Thread-393::DEBUG::2013-09-19
11:59:42,506::hsm::2396::Storage.HSM::(connectStorageServer) knownSDs: {}
Thread-393::INFO::2013-09-19
11:59:42,506::logUtils::47::dispatcher::(wrapper) Run and protect:
connectStorageServer, Return response: {'statuslist': [{'status': 469,
'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19
11:59:42,506::task::1168::TaskManager.Task::(prepare)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::finished: {'statuslist':
[{'status': 469, 'id': '00000000-0000-0000-0000-000000000000'}]}
Thread-393::DEBUG::2013-09-19
11:59:42,506::task::579::TaskManager.Task::(_updateState)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::moving from state preparing ->
state finished
Thread-393::DEBUG::2013-09-19
11:59:42,506::resourceManager::939::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-393::DEBUG::2013-09-19
11:59:42,507::resourceManager::976::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-393::DEBUG::2013-09-19
11:59:42,507::task::974::TaskManager.Task::(_decref)
Task=`12c38fec-0072-4974-a8e3-9125b3908246`::ref 0 aborting False
*Other info:*
- I have two nodes, ovirt001, ovirt002 they are both Fedora 19.
- The gluster bricks are replicated and located on the nodes.
(ovirt001:rep2-virt, ovirt002:rep2-virt)
- Local directory for the mount, I changed permissions on glusterSD to 777,
it was 755, and there is nothing in that directory:
[root@ovirt001 mnt]# pwd
/rhev/data-center/mnt
[root@ovirt001 mnt]# ll
total 4
drwxrwxrwx. 2 vdsm kvm 4096 Sep 19 12:18 glusterSD
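For what it's worth, the 777 on the empty local mount point is probably not where the check happens: vdsm accesses the *mounted* gluster root, whose ownership comes from the brick, as its own user (uid 36, vdsm:kvm on stock installs, which is my assumption here). A rough sketch of the kind of permission test that fails:

```python
import os

VDSM_UID, KVM_GID = 36, 36  # assumption: default vdsm user / kvm group ids on oVirt hosts

def has_rwx(st, uid, gid):
    # Pick the owner/group/other permission triplet that applies to uid/gid,
    # then require read+write+execute; roughly what vdsm's path check amounts to.
    if st.st_uid == uid:
        shift = 6
    elif st.st_gid == gid:
        shift = 3
    else:
        shift = 0
    return (st.st_mode >> shift) & 0o7 == 0o7

# A gluster root owned root:root with mode 0755 fails for uid 36, even though
# root can read it fine: has_rwx(os.stat(mount_path), VDSM_UID, KVM_GID) is False.
```

If that is the case here, the usual fix is to set ownership on the gluster side (e.g. `gluster volume set rep2-virt storage.owner-uid 36` and `storage.owner-gid 36`; hedged, please check your gluster version's docs) rather than chmod on the empty mount directory.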
I find it odd that the UUIDs listed in the vdsm logs are all zeros.
Appreciate any help,
*Steve*
[Users] noVNC : Just a "Thank you"
by Nicolas Ecarnot
Hi,
This is just a useless message to say thank you to anyone who feels
concerned, for having enabled the use of the noVNC console in oVirt, for
writing the wiki page, and for the code involved.
I'm so glad to have this feature that I wanted to thank you all.
That's all.
--
Nicolas Ecarnot
[Users] deleting a disk that doesn't exist on the storage domain but is referenced in webadmin
by Nathanaël Blanchet
Hello,
On oVirt 3.2 el6 I created a disk attached to a VM, but I can't remove
it anymore. When I look at the log I see this message:
2013-10-11 23:43:23,086 ERROR
[org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-47)
[4da21e03] Command DeleteImageGroupVDS execution failed. Exception:
IrsOperationFailedNoFailoverException: IRSGenericException:
IRSErrorException: Image does not exist in domain:
'image=763f6930-16b6-4afc-a39e-4fab148e15cf,
domain=5ef8572c-0ab5-4491-994a-e4c30230a525
So it is referenced in the database but doesn't exist on the storage
domain. I want to erase it from the webadmin, so I tried to find the
reference to this disk in the pg database in order to delete it, but I
haven't found it. How can I do this?
PS: it appears in the webadmin as a floating disk, but I can't attach it
to any VM.
--
Nathanaël Blanchet
Supervision réseau
Pôle exploitation et maintenance
Département des systèmes d'information
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
Re: [Users] Host USB
by emitor@gmail.com
I've installed the hook on both hosts that belong to the cluster where the
VM is, but I don't get the option to configure it. I've also pinned the VM
to a host, but I get the same options as in any other VM from the cluster.
Is there something that I'm not doing?
Regards!
2013/10/11 Eduardo Ramos <eduardo(a)freedominterface.org>
> You're welcome!
>
>
> On 10/11/2013 02:13 PM, emitor(a)gmail.com wrote:
>
> Oh! great! I thought that was done by modifying the xml "by hand".
>
> Thanks!
>
>
> 2013/10/11 Eduardo Ramos <eduardo(a)freedominterface.org>
>
>> Emitor,
>>
>> You won't put it into a XML. You will configure it in ovirt webadmin.
>>
>> First you have to install hostusb hook on the host machine. Then editing
>> your virtual machine, go to the 'Custom Properties' tab. There, select
>> 'hostusb' and in the right textbox, put the id. Example: 0x1234:0xbeef.
>>
>> You can define several ids, putting '&' between them:
>> 0x1234:0xbeef&0x2222:0xabaa.
>>
>> http://imagebin.org/273393
>>
>> I hope it is what you want.
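Eduardo's format above can be built mechanically from `lsusb` output; a small sketch (the helper name is mine, not part of the hook):

```python
def hostusb_value(usb_ids):
    # usb_ids are vendor:product pairs as printed by `lsusb`, e.g. "413c:2106".
    # Each half gets a 0x prefix; multiple devices are joined with '&'.
    return "&".join(
        ":".join("0x" + half for half in usb_id.split(":")) for usb_id in usb_ids
    )

print(hostusb_value(["413c:2106"]))               # -> 0x413c:0x2106
print(hostusb_value(["1234:beef", "2222:abaa"]))  # -> 0x1234:0xbeef&0x2222:0xabaa
```

The resulting string is what goes into the 'hostusb' custom property in the VM's edit dialog.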
>>
>>
>> On 10/11/2013 01:56 PM, emitor(a)gmail.com wrote:
>>
>> Thanks for your answer Eduardo, but I don't know which file I have to
>> put the '0x' values in; I mean the XML file that describes the VM. Where
>> is it located?
>>
>> Regards!
>>
>>
>> 2013/10/11 Eduardo Ramos <eduardo(a)freedominterface.org>
>>
>>> Hi my friend!
>>>
>>> On the host, you can run 'lsusb' command. It will return you some like
>>> this:
>>>
>>> Bus 002 Device 004: ID 413c:2106 Dell Computer Corp. Dell QuietKey
>>> Keyboard
>>>
>>> You just add '0x' in the begining of ids.
>>>
>>>
>>>
>>>
>>> On 10/11/2013 01:17 PM, emitor(a)gmail.com wrote:
>>>
>>> Hi,
>>>
>>> I would like to implement the USB pass through from a host to a VM. I
>>> don't know how to configure the hook that allow me to do this. Could you
>>> give me some guidance with this?
>>>
>>> I've read this: http://www.ovirt.org/VDSM-Hooks/hostusb
>>>
>>> But I don't know where is located the "VM XML" that it's mentioned
>>> there.
>>>
>>> Regards!
>>>
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>>
>>>
>>
>>
>> --
>> *Emiliano Tortorella*
>> +598 98941176
>> emitor(a)gmail.com
>>
>>
>>
>
>
> --
> *Emiliano Tortorella*
> +598 98941176
> emitor(a)gmail.com
>
>
>
--
*Emiliano Tortorella*
+598 98941176
emitor(a)gmail.com
[Users] noVNC server disconnects
by Mitja Mihelič
Hi!
I am trying to setup noVNC. As I try to open a console I get a red bar
at the top of the page saying "Server disconnected (code: 1006)".
I have followed the instructions here:
http://www.ovirt.org/Features/noVNC_console (Setup Websocket Proxy on
the Engine Post Install).
I have imported the https://server.example.com/ca.crt into my browser
(Firefox and Chrome).
The only thing that gets logged to server.log when I try to open the
noVNC console is:
2013-10-14 16:39:51,803 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (ajp--127.0.0.1-8702-3)
Running command: SetVmTicketCommand internal: false. Entities affected
: ID: 43efe22e-67c2-4bb2-a089-05dc8770efd6 Type: VM
2013-10-14 16:39:51,807 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--127.0.0.1-8702-3) START, SetVmTicketVDSCommand(HostName =
server.example.com, HostId = bf21cb8f-9a63-40b9-9f18-4f6fc74f74f1,
vmId=43efe22e-67c2-4bb2-a089-05dc8770efd6, ticket=aicJH+Rf0F/C,
validTime=120,m userName=myuser,
userId=fa4c5381-04d8-11e3-9045-b4f33f63d7d4), log id: 3504f642
2013-10-14 16:39:51,848 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--127.0.0.1-8702-3) FINISH, SetVmTicketVDSCommand, log id: 3504f642
We are running a fresh oVirt 3.3 on CentOS 6.4 with the following
versions of packages:
ovirt-engine-3.3.0-4.el6.noarch
ovirt-engine-backend-3.3.0-4.el6.noarch
ovirt-engine-cli-3.3.0.4-1.el6.noarch
ovirt-engine-dbscripts-3.3.0-4.el6.noarch
ovirt-engine-lib-3.3.0-4.el6.noarch
ovirt-engine-restapi-3.3.0-4.el6.noarch
ovirt-engine-sdk-python-3.3.0.6-1.el6.noarch
ovirt-engine-setup-3.3.0-4.el6.noarch
ovirt-engine-setup-plugin-allinone-3.3.0-4.el6.noarch
ovirt-engine-tools-3.3.0-4.el6.noarch
ovirt-engine-userportal-3.3.0-4.el6.noarch
ovirt-engine-webadmin-portal-3.3.0-4.el6.noarch
ovirt-engine-websocket-proxy-3.3.0-4.el6.noarch
ovirt-host-deploy-1.1.1-1.el6.noarch
ovirt-host-deploy-java-1.1.1-1.el6.noarch
ovirt-host-deploy-offline-1.1.1-1.el6.noarch
ovirt-image-uploader-3.3.0-1.el6.noarch
ovirt-iso-uploader-3.3.0-1.el6.noarch
ovirt-log-collector-3.3.0-1.el6.noarch
ovirt-release-el6-8-1.noarch
novnc-0.4-2.el6.noarch
Kind regards,
Mitja
--
--
Mitja Mihelič
ARNES, Tehnološki park 18, p.p. 7, SI-1001 Ljubljana, Slovenia
tel: +386 1 479 8877, fax: +386 1 479 88 78
[Users] Can't delete snapshot
by Eduardo Ramos
Hi friends!
I'm trying to delete a snapshot, but webadmin returns:
"Failed to complete Snapshot bla deletion on VM piromba.dem.inpe.br."
In vdsm.log of SPM, I got:
Oct 10 16:24:10 newton kernel: end_request: I/O error, dev dm-24, sector
2147483632
Oct 10 16:24:10 newton kernel: end_request: I/O error, dev dm-24, sector 0
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector
2147483520
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector
2147483632
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector 0
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector
2147483520
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector
2147483632
Oct 10 16:24:11 newton kernel: end_request: I/O error, dev dm-24, sector 0
Oct 10 16:24:12 newton vdsm TaskManager.Task ERROR Task=`767206b3-87e9-4686-b06c-6fba24bdb677`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 861, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/storage/task.py", line 320, in run
    return self.cmd(*self.argslist, **self.argsdict)
  File "/usr/share/vdsm/storage/securable.py", line 63, in wrapper
    return f(self, *args, **kwargs)
  File "/usr/share/vdsm/storage/sp.py", line 1786, in mergeSnapshots
    image.Image(repoPath).merge(sdUUID, vmUUID, imgUUID, ancestor, successor, postZero)
  File "/usr/share/vdsm/storage/image.py", line 1084, in merge
    allVols = sdDom.getAllVolumes()
  File "/usr/share/vdsm/storage/blockSD.py", line 869, in getAllVolumes
    return getAllVolumes(self.sdUUID)
  File "/usr/share/vdsm/storage/blockSD.py", line 168, in getAllVolumes
    and vImg not in res[vPar]['imgs']:
KeyError: '63650a24-7e83-4c0a-851d-0ce9869a294d'
[root@newton ~]# ls -l /dev/mapper/
total 0
lrwxrwxrwx. 1 root root 8 Oct 8 17:34 1IET_00010001 -> ../dm-24
Does anyone know what this device is?
Thanks.
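The KeyError in the last frame of the traceback above suggests a volume that points at a parent image no longer present in the set getAllVolumes just built, e.g. leftovers from an interrupted merge. A toy sketch of that failure mode (my reconstruction, not vdsm's actual code):

```python
def index_volumes(entries):
    # entries: (img_uuid, parent_img_uuid or None) pairs, as recovered from LV tags.
    res = {img: {"imgs": [img]} for img, _ in entries}
    for img, parent in entries:
        # KeyError is raised on res[parent] when a volume names a parent
        # image that no volume in the domain belongs to anymore:
        if parent is not None and img not in res[parent]["imgs"]:
            res[parent]["imgs"].append(img)
    return res
```

Under that reading, the KeyError's UUID is the missing parent image, which may help locate the stale volume on the block domain.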