Is there a plan to allow VM migration?
by John Gardeniers
As per the subject, is there a plan to allow VM migration? By that I
don't mean the current approach of detaching the export storage domain and
attaching it to another compatible system; I mean a real migration to
something external to the system the VM currently runs on.
At least in my universe, there are times when it is highly desirable or
even necessary to take an Ovirt/RHEV VM and move or copy it to another
system, such as between completely separate Ovirt/RHEV systems or to
other systems such as KVM/QEMU or VirtualBox. The nearest I can manage
right now is to use an imaging tool to make a copy of the drive and
import that into the destination machine. That's slow and cumbersome, and
still requires the VM's configuration to be manually duplicated.
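For reference, the imaging workaround described above can at least be scripted. This is a minimal sketch, assuming the exported disk is a raw image and that qemu-img is installed; the paths are placeholders:

```python
import subprocess

def qemu_img_convert_cmd(src, dst, in_fmt="raw", out_fmt="vmdk"):
    # qemu-img can rewrite a raw oVirt disk image into a format another
    # hypervisor can import (vmdk for VirtualBox/VMware, qcow2 for KVM).
    return ["qemu-img", "convert", "-f", in_fmt, "-O", out_fmt, src, dst]

cmd = qemu_img_convert_cmd("/exports/vm01-disk1.raw", "/tmp/vm01-disk1.vmdk")
# subprocess.run(cmd, check=True)  # run this on a host that has qemu-img
```

As noted above, this only moves the disk; the VM configuration (CPUs, memory, NICs) still has to be recreated by hand on the destination.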
regards,
John
Not able to set Hostname for guest OS from cloud init using API
by Chandrahasa S
Dear All,
We are not able to set the hostname of the guest OS through the API using cloud-init.
Other information such as IP, subnet, and gateway is applied correctly.
Kindly help.
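For anyone hitting the same problem, here is a hedged sketch of the request body for starting the VM with cloud-init. The element names (use_cloud_init, cloud_init/host/address) follow the oVirt 3.4-era REST schema and may differ in other versions; the hostname and VM ID are placeholders:

```python
import xml.etree.ElementTree as ET

hostname = "guest01.example.com"  # placeholder

# Body for POST https://<engine>/api/vms/<vm-id>/start
# sent with header Content-Type: application/xml
payload = """<action>
  <use_cloud_init>true</use_cloud_init>
  <vm>
    <initialization>
      <cloud_init>
        <host><address>{0}</address></host>
      </cloud_init>
    </initialization>
  </vm>
</action>""".format(hostname)

# Sanity-check the XML before sending it
assert ET.fromstring(payload).findtext(".//host/address") == hostname
```

If the other cloud-init fields (IP, subnet, gateway) already work for you, the hostname element would sit alongside them inside the same cloud_init block.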
Regards,
Chandrahasa S
Tata Consultancy Services
Data Center- ( Non STPI)
2nd Pokharan Road,
Subash Nagar ,
Mumbai - 400601,Maharashtra
India
Ph:- +91 22 677-81825
Buzz:- 4221825
Mailto: chandrahasa.s(a)tcs.com
Website: http://www.tcs.com
____________________________________________
Experience certainty. IT Services
Business Solutions
Consulting
____________________________________________
=====-----=====-----=====
Notice: The information contained in this e-mail
message and/or attachments to it may contain
confidential or privileged information. If you are
not the intended recipient, any dissemination, use,
review, distribution, printing or copying of the
information contained in this e-mail message
and/or attachments to it are strictly prohibited. If
you have received this communication in error,
please notify us by reply e-mail or telephone and
immediately and permanently delete the message
and any attachments. Thank you
[QE][ACTION REQUIRED] oVirt 3.5.0 RC status
by Sandro Bonazzola
Hi,
Since we still had one blocker on the oVirt 3.5.0 RC1 release, we'll need a second RC build.
Suggested date is Mon 2014-08-11.
Maintainers:
- Please be sure that the 3.5 snapshot allows creating VMs before *2014-08-10 15:00 UTC*
- Please be sure that no pending patches are going to block the release before *2014-08-10 15:00 UTC*
- If any patch must block the RC release please raise the issue as soon as possible.
The bug tracker [1] shows the following proposed blockers to be reviewed:
Bug ID Whiteboard Status Summary
1124099 storage POST Live Merge: Limit merge operations based on hosts' capabilities
1126800 virt POST Adding hosts to oVirt results in error message
Feature freeze is now effective, and branch has been created.
All new patches must be backported to 3.5 branch too.
Features completed are marked in green on Features Status Table [2]
There are still 412 bugs [3] targeted to 3.5.0.
Excluding node and documentation bugs we still have 364 bugs [4] targeted to 3.5.0.
We also still have issues with package building and dependencies:
https://fedorahosted.org/ovirt/ticket/242 - vdsm_3.5_create-rpms_merged broken on EL7 slave
package: ovirt-optimizer-0.2-2.el6.noarch from check-custom-el6
unresolved deps:
protobuf-java >= 0:2.5
package: ovirt-optimizer-jetty-0.2-2.el6.noarch from check-custom-el6
unresolved deps:
resteasy
jetty >= 0:9
cdi-api
Maintainers / Assignee:
- Please ensure that completed features are marked in green on the Features Status Table [2]
- Please remember to rebuild your packages before *2014-08-10 15:00* if needed; otherwise the nightly snapshot will be taken.
- If you find a blocker bug please remember to add it to the tracker [1]
- Please fill in the release notes; the page has been created here [5]
- Please review and add test cases to oVirt 3.5 Third Test Day [6]
- Please update the target to 3.5.1 or later for bugs that won't be in 3.5.0:
it will ease gathering the blocking bugs for next releases.
Community:
- Save the date for the third test day, scheduled on 2014-08-12!
- You're welcome to join us in testing the next beta release and getting involved in oVirt Quality Assurance [7]
[1] http://bugzilla.redhat.com/1073943
[2] http://bit.ly/17qBn6F
[3] http://red.ht/1pVEk7H
[4] http://red.ht/1zT2mSq
[5] http://www.ovirt.org/OVirt_3.5_Release_Notes
[6] http://www.ovirt.org/OVirt_3.5_TestDay
[7] http://www.ovirt.org/OVirt_Quality_Assurance
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[QE][ACTION NEEDED] oVirt 3.4.4 RC status
by Sandro Bonazzola
Hi,
We're going to start composing oVirt 3.4.4 RC on *2014-09-09 08:00 UTC* from 3.4 branch.
Maintainers:
- Please be sure that the 3.4 snapshot allows creating VMs before *2014-09-08 15:00 UTC*
- Please be sure that no pending patches are going to block the release before *2014-09-08 15:00 UTC*
- If any patch must block the RC release please raise the issue as soon as possible.
A bug tracker [1] has been opened and shows no open blockers.
There are still 13 bugs [2] targeted to 3.4.4.
Excluding node and documentation bugs we still have 7 bugs [3] targeted to 3.4.4.
Whiteboard Bug ID Status Summary
network 1112688 NEW [Neutron integration] Log collection is missing for Neutron appliance
network 1048880 NEW [vdsm][openstacknet] Migration fails for vNIC using OVS + security groups
network 1001186 NEW With AIO installer and NetworkManager enabled, the ovirtmgmt bridge is not properly configured
node 1097735 NEW "Reboot" button failed to work in progress_page with serial console to install ovirt-node iso.
node 988341 NEW Should not create bond when report an error in configuration process
node 1023481 ASSIGNED Sane and working default libvirt config
node 753306 NEW SR-IOV support
node 995321 NEW remove existing efi entries "oVirt Node Hypervisor" in UEFI menu failed
node 969340 NEW Migrate ovirt-node-installer backend and ovirt-auto-install backend to new code base
sla 1059309 NEW [events] 'Available memory of host $host (...) under defined threshold...' is logged only once
storage 1111655 NEW Disks imported from Export Domain to Data Domain are converted to Preallocated after upgrade ...
virt 1070890 POST Run vm with odd number of cores drop libvirt error
virt 1126887 NEW recovery of VMs after VDSM restart doesn't work on PPC
Maintainers / Assignee:
- Please add the bugs to the tracker if you think that 3.4.4 should not be released without them fixed.
- Please update the target to 3.5.1 or later for bugs that won't be in 3.4.4:
it will ease gathering the blocking bugs for next releases.
- Please fill in the release notes; the page has been created here [4]
Community:
- If you're testing oVirt 3.4 nightly snapshot, please add yourself to the test page [5]
[1] http://bugzilla.redhat.com/1118689
[2] http://red.ht/1qwhPXB
[3] http://red.ht/1sQDLwg
[4] http://www.ovirt.org/OVirt_3.4.4_Release_Notes
[5] http://www.ovirt.org/Testing/oVirt_3.4.4_Testing
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Re: [ovirt-users] Guest VM Console Creation/Access using REST API and noVNC
by Shanil S
Hi Sven,
Regarding the ticket "path": is it the direct combination of host and port?
For example, if the host is 1.2.3.4 and the port is 5100, what should
the "path" value be? Is any encryption needed here?
>>so you have access from the browser to the websocket-proxy, network
wise? can you ping the proxy?
and the websocket proxy can reach the host where the vm runs?
Yes; there should be no firewall issue, as we can access the console from
the oVirt engine portal.
Do we need to allow our own portal IP address on the oVirt engine and
the hypervisors as well?
--
Regards
Shanil
On Wed, Jul 16, 2014 at 3:13 PM, Sven Kieske <S.Kieske(a)mittwald.de> wrote:
>
>
> On 16.07.2014 11:30, Shanil S wrote:
> > We will get the ticket details like host, port and password from the
> > ticket API function call, but we didn't get the "path" value. Will we
> > get it from the ticket details? I couldn't find it there.
>
> the "path" is the combination of host and port.
>
> so you have access from the browser to the websocket-proxy, network
> wise? can you ping the proxy?
> and the websocket proxy can reach the host where the vm runs?
> are you sure there are no firewalls in between?
> also you should pay attention to how long your ticket
> is valid; you can specify the duration in minutes in your API call.
>
> --
> Mit freundlichen Grüßen / Regards
>
> Sven Kieske
>
> Systemadministrator
> Mittwald CM Service GmbH & Co. KG
> Königsberger Straße 6
> 32339 Espelkamp
> T: +49-5772-293-100
> F: +49-5772-293-333
> https://www.mittwald.de
> Geschäftsführer: Robert Meyer
> St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen
> Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen
>
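To make Sven's answer concrete: the websocket "path" is just the host:port pair from the ticket response. A minimal sketch follows; the noVNC page name and the proxy address/port are assumptions about a typical setup, and the ticket values are placeholders:

```python
# Values returned by the ticket API call (placeholders)
host, port, password = "1.2.3.4", 5100, "ticket-password"

# Per the reply above, the "path" is the combination of host and port.
path = "{0}:{1}".format(host, port)

# Assumed noVNC URL layout; the websocket proxy host/port depend on your setup.
novnc_url = ("http://novnc.example.com/vnc_auto.html"
             "?host=websocketproxy.example.com&port=6100&path=" + path)
```

The ticket password is then entered in noVNC itself (or appended as a parameter, depending on the noVNC version), and the ticket expires after the duration requested in the API call.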
Standalone Websocket Proxy not working
by Punit Dambiwal
Hi All,
I have followed the document below to install the websocket proxy on a
separate machine:
http://www.ovirt.org/Features/noVNC_console#Setup_Websocket_Proxy_on_a_Se...
But when I try to open the VNC console, it fails with the following
error:
"Server disconnected (code: 1006)"
Engine Logs :-
-----------------
2014-08-05 15:51:22,540 INFO
[org.ovirt.engine.core.bll.SetVmTicketCommand] (ajp--127.0.0.1-8702-5)
[6101845f] Running command: SetVmTicketCommand internal: false. Entities
affected : ID: 6e0caf73-ae7d-493e-a51d-ecc32f507f00 Type: VM
2014-08-05 15:51:22,574 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--127.0.0.1-8702-5) [6101845f] START, SetVmTicketVDSCommand(HostName =
Quanta, HostId = 10d7b6ea-d9fa-46af-bcd7-1d7b3c15b5ca,
vmId=6e0caf73-ae7d-493e-a51d-ecc32f507f00, ticket=5r7OgcpCeGCt,
validTime=120,m userName=admin,
userId=fdfc627c-d875-11e0-90f0-83df133b58cc), log id: 67f98489
2014-08-05 15:51:22,596 INFO
[org.ovirt.engine.core.vdsbroker.vdsbroker.SetVmTicketVDSCommand]
(ajp--127.0.0.1-8702-5) [6101845f] FINISH, SetVmTicketVDSCommand, log id:
67f98489
2014-08-05 15:51:22,623 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ajp--127.0.0.1-8702-5) [6101845f] Correlation ID: 6101845f, Call Stack:
null, Custom Event ID: -1, Message: user admin initiated console session
for VM test1
-----------------
When I navigate to https://websocketproxyip:<port> to accept the
certificate for the first time, it throws the error "The connection was reset".
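One way to narrow down a code 1006 disconnect is to check whether the proxy's TLS port answers at all from the client machine. A hedged sketch (6100 is the usual websocket proxy port; certificate verification is disabled because the engine CA is typically self-signed):

```python
import socket
import ssl

def probe_proxy(host, port=6100, timeout=5):
    # Open a TCP connection and attempt a TLS handshake with the
    # websocket proxy; returns the negotiated protocol version.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # self-signed engine CA
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock) as tls:
            return tls.version()  # e.g. "TLSv1.2" when the handshake works

# probe_proxy("websocketproxy.example.com")  # run from the client side
```

If this raises ConnectionRefusedError or times out, the problem is network or firewall reachability rather than noVNC itself; if the handshake succeeds, the certificate trust step in the browser is the more likely culprit.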
Thanks,
Punit
Sharing iSCSI data storage domain across multiple clusters in the same datacenter
by santosh
Hi,
Can we share the iSCSI data storage domain across multiple clusters in
the same datacenter?
Following are the setup details which I tried.
- One datacenter, say DC1
- In DC1, two clusters, say CL1 and CL2
- In CL1, one host, say H1; and in CL2, one host, say H2
- An iSCSI data storage domain is configured, with external storage
LUNs exported to host H1 (a host in CL1 of the datacenter).
While adding H1 to CL1 succeeded, adding H2 to CL2 fails
with the following error in vdsm.log.
Traceback (most recent call last):
File "/usr/share/vdsm/storage/task.py", line 873, in _run
return fn(*args, **kargs)
File "/usr/share/vdsm/logUtils.py", line 45, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/storage/hsm.py", line 1020, in
connectStoragePool
spUUID, hostID, msdUUID, masterVersion, domainsMap)
File "/usr/share/vdsm/storage/hsm.py", line 1091, in
_connectStoragePool
res = pool.connect(hostID, msdUUID, masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 630, in connect
self.__rebuild(msdUUID=msdUUID, masterVersion=masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1153, in __rebuild
self.setMasterDomain(msdUUID, masterVersion)
File "/usr/share/vdsm/storage/sp.py", line 1360, in setMasterDomain
raise se.StoragePoolMasterNotFound(self.spUUID, msdUUID)
StoragePoolMasterNotFound: Cannot find master domain:
'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'
Thread-13::DEBUG::2014-07-30
15:24:49,780::task::885::TaskManager.Task::(_run)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._run:
07997682-8d6b-42fd-acb3-1360f14860d6
('a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2', 2,
'741f7913-09ad-4d96-a225-3bda6d06e042', 1, None) {} failed -
stopping task
Thread-13::DEBUG::2014-07-30
15:24:49,780::task::1211::TaskManager.Task::(stop)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::stopping in state
preparing (force False)
Thread-13::DEBUG::2014-07-30
15:24:49,780::task::990::TaskManager.Task::(_decref)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 1 aborting True
Thread-13::INFO::2014-07-30
15:24:49,780::task::1168::TaskManager.Task::(prepare)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::aborting: Task is
aborted: 'Cannot find master domain' - code 304
Thread-13::DEBUG::2014-07-30
15:24:49,781::task::1173::TaskManager.Task::(prepare)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Prepare: aborted:
Cannot find master domain
Thread-13::DEBUG::2014-07-30
15:24:49,781::task::990::TaskManager.Task::(_decref)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::ref 0 aborting True
Thread-13::DEBUG::2014-07-30
15:24:49,781::task::925::TaskManager.Task::(_doAbort)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::Task._doAbort: force False
Thread-13::DEBUG::2014-07-30
15:24:49,781::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-13::DEBUG::2014-07-30
15:24:49,781::task::595::TaskManager.Task::(_updateState)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
preparing -> state aborting
Thread-13::DEBUG::2014-07-30
15:24:49,781::task::550::TaskManager.Task::(__state_aborting)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::_aborting: recover
policy none
Thread-13::DEBUG::2014-07-30
15:24:49,782::task::595::TaskManager.Task::(_updateState)
Task=`07997682-8d6b-42fd-acb3-1360f14860d6`::moving from state
aborting -> state failed
Thread-13::DEBUG::2014-07-30
15:24:49,782::resourceManager::940::ResourceManager.Owner::(releaseAll)
Owner.releaseAll requests {} resources {}
Thread-13::DEBUG::2014-07-30
15:24:49,782::resourceManager::977::ResourceManager.Owner::(cancelAll)
Owner.cancelAll requests {}
Thread-13::ERROR::2014-07-30
15:24:49,782::dispatcher::65::Storage.Dispatcher.Protect::(run)
{'status': {'message': "Cannot find master domain:
'spUUID=a4dfaf64-adfa-4cfa-88d5-986fbdb2b2b2,
msdUUID=741f7913-09ad-4d96-a225-3bda6d06e042'", 'code': 304}}
Please advise if I need to have one storage domain per cluster in a
given datacenter.
Thanks, Santosh.
***************************Legal Disclaimer***************************
"This communication may contain confidential and privileged material for the
sole use of the intended recipient. Any unauthorized review, use or distribution
by others is strictly prohibited. If you have received the message by mistake,
please advise the sender by reply email and delete the message. Thank you."
**********************************************************************
[ANN] oVirt 3.5.0 First Release Candidate is now available for testing
by Sandro Bonazzola
The oVirt team is pleased to announce that the 3.5.0 First Release Candidate is now
available for testing as of Aug 5th 2014.
The release candidate is now available for Fedora 19, Fedora 20 and Red Hat Enterprise Linux 6.5
(or similar) and allows you to use Red Hat Enterprise Linux 7 as a node and run Hosted Engine.
Feel free to join us in testing it on the third test day, Tue Aug 12th!
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing ovirt-3.5-pre repository has been updated to deliver this
release without the need to enable any other repository.
If you're already using the oVirt repository on EL7, please update the ovirt-release3.5 rpm;
it will provide the additional repositories needed for it.
Please refer to release notes [1] for Installation / Upgrade instructions.
New oVirt Live, oVirt Guest Tools and oVirt Node ISO will be available soon as well[2].
Please note that mirrors may need a couple of days before being synchronized.
If you want to be sure you're using the latest rpms and don't want to wait for the mirrors,
you can edit /etc/yum.repos.d/ovirt-3.5.repo, commenting out the mirror line and
uncommenting the baseurl line.
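For example, the relevant stanza of /etc/yum.repos.d/ovirt-3.5.repo would end up looking roughly like this (a sketch only; the section name and exact URLs in your copy of the file may differ):

```ini
[ovirt-3.5]
name=Latest oVirt 3.5 Release
# mirror line commented out so yum goes straight to the main server:
#mirrorlist=http://resources.ovirt.org/pub/yum-repo/mirrorlist-ovirt-3.5-$releasever
# baseurl line uncommented:
baseurl=http://resources.ovirt.org/pub/ovirt-3.5/rpm/$releasever/
enabled=1
```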
Known issues in this RC:
- Bug 1124099 - Live Merge: Limit merge operations based on hosts' capabilities
- ovirt-optimizer has not been updated for EL6 due to dependency issues
- vdsm for EL7 is missing a couple of patches compared to other distros due to build issues
[1] http://www.ovirt.org/OVirt_3.5_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
SPM in oVirt 3.6
by Daniel Helgenberger
Hello,
Just out of pure curiosity: in a BZ [1], Allon mentions that SPM will go away
in oVirt 3.6.
This seems like a major change to me. I assume this will replace
sanlock as well? What will SPM be replaced with?
Thanks,
Daniel
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1116558#c9
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
Error after changing IP of Node (FQDN is still the same)
by ml ml
Hello List,
i on my ovirt engine i am getting:
ERROR [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
(DefaultQuartzScheduler_Worker-73) [412fc539] Start SPM Task failed -
result: cleanSuccess, message: VDSGenericException: VDSErrorException:
Failed to HSMGetTaskStatusVDS, error = Cannot acquire host id, code = 661
The FQDN is still the same; I only changed the IPs in /etc/hosts.
Any idea?
Thanks,
Mario