Spurious reboot?
by Derek Atkins
Hi,
Last night at approximately 22:03 (give or take a few minutes) my single-server ovirt system rebooted. I saw a root login (presumably from the engine) and then the host rebooted.

I don't see anything specific in the logs that would indicate WHY it rebooted.

Any suggestions for where to look or what to look for? I had updated the host yesterday to 4.0.5 and did reboot the system myself several times. The good news is that I succeeded in getting my scripts working so all my VMs came back. But before I go and deploy anything mission critical I'd like to determine why the system decided it should reboot.

Suggestions?

-derek

Sent from my mobile device. Please excuse any typos.
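Not part of the original post, but a few generic places to look after an unexplained reboot on an EL7 host (reading the previous boot's journal assumes persistent journald storage; the 22:03 timestamp comes from the post above):

    # Was it a clean shutdown or a hard reset, and who was logged in at the time?
    last -x reboot shutdown | head
    last root | head

    # Messages from the boot *before* the reboot
    journalctl -b -1 -e

    # Fencing, watchdog, sanlock or OOM activity around 22:03
    grep -iE 'fence|watchdog|wdmd|sanlock|oom' /var/log/messages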
8 years, 1 month
IP address doesn't appear in the host network interfaces tab
by Nathanaël Blanchet
Hi all,
I am trying to change my migration network from one vlan to another. To do so I assigned an IP on the new vlan to each of my hosts via DHCP, but when I change the migration network at the cluster level, the engine complains that my host doesn't have an IP.
I tried "Sync all networks" to refresh, but nothing happens.
However, the IP on the new vlan does exist on the host.
The only workaround is to restart vdsmd; then the IP appears and I can go further.
Can this be considered a bug?
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
8 years, 1 month
used bandwidth when live migrating
by Nathanaël Blanchet
Hi all,
In production we use a dedicated 10G vlan link for live migration. In the cluster options tab I set the migration bandwidth limit to 10000 Mbps. Everything works as expected and 25 VMs on a host now migrate in a few seconds (exactly 13), but I'm not able to measure the real bandwidth consumed. I want to evaluate this because my goal is to dedicate a vlan for gluster on the same 10G nic, and I don't want an overload issue with gluster when VM migrations happen.
So my questions are: how does live migration work? Is it a RAM-to-RAM transfer between two hosts? Is migration bandwidth limited anywhere by disk I/O or by the nic capabilities? Could 10 Gbps be fully used for this? What would you advise to make gluster and migration work on the same nic (QoS?)
Thank you for your help.
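One way to measure the actual migration throughput (not from the original thread; 'em2' is a placeholder for the migration vlan interface):

    # per-second throughput on the migration interface while a migration runs
    sar -n DEV 1 | grep -E 'IFACE|em2'

    # or interactively, per connection
    iftop -i em2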
--
Nathanaël Blanchet
Supervision réseau
Pôle Infrastrutures Informatiques
227 avenue Professeur-Jean-Louis-Viala
34193 MONTPELLIER CEDEX 5
Tél. 33 (0)4 67 54 84 55
Fax 33 (0)4 67 54 84 14
blanchet(a)abes.fr
8 years, 1 month
4.0.5 create vm from template on cinder/ceph backed storage
by Jordan Conway
I've got a problem: since upgrading to 4.0.5 I am still unable to create a VM from a template.
The issue now is that it seems to be stuck in a loop, trying and failing to create the VM, which is making my logs explode with the following:
https://paste.fedoraproject.org/489161/14799981/
And in the engine UI, "Failed to complete VM fromtemplate-1 creation." appears thousands of times.
I believe the issue started with this SQL error:
https://paste.fedoraproject.org/489183/00010131/
Any insight on how I can stop the persistent error would be appreciated, as it is also preventing me from running engine-setup due to running jobs, even though vdsClient shows nothing:
vdsClient -s 0 getAllTasksStatuses
{'status': {'message': 'OK', 'code': 0}, 'allTasksStatus': {}}
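A hedged way to see which jobs the engine itself still considers running (the engine-setup blocker), independent of vdsClient — run on the engine host; the table and column names below match the stock 4.x engine schema but should be verified before relying on them:

    # list jobs the engine still considers running
    sudo -u postgres psql engine -c \
      "select job_id, action_type, status, start_time from job where status = 'STARTED';"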
Thank you,
Jordan Conway
8 years, 1 month
Timeout with large image uploads in ovirt 4.0.5
by Claude Durocher
When trying to upload large kvm images (more than 50 GB) with the upload option, we get timeouts. In the ovirt-imageio-proxy/image-proxy.log we have:

ERROR 2016-11-24 16:18:51,211 session:293:root:(_decode_ovirt_ticket) Failed to verify proxy ticket: Ticket life time expired

We can recover and restart the upload but it is annoying to constantly watch the process. Is there a timeout parameter we can change in the imageio-proxy?
8 years, 1 month
vm.conf on one of the node is missing
by knarra
Hi,
I have three nodes with glusterfs as the storage domain. For some reason vm.conf is missing from /var/run/ovirt-hosted-engine-ha, and because of this one of my hosts shows "Hosted Engine HA: Not Active". Once I copy the file from another node and restart the ovirt-ha-broker and ovirt-ha-agent services, everything works fine. But then it happens again. Can someone please help me identify why this happens? Below is the log I see in ovirt-ha-agent.log:
https://paste.fedoraproject.org/489120/79990345/
Thanks
kasturi
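For reference, the commands involved in the workaround described above (both are standard hosted-engine tooling; run them on the affected host):

    # check each host's HA agent view, including the "Hosted Engine HA" state
    hosted-engine --vm-status

    # after restoring vm.conf, restart the HA services as the poster describes
    systemctl restart ovirt-ha-broker ovirt-ha-agent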
8 years, 1 month
Re: [ovirt-users] Recommended Ovirt Implementation on Active-Active Datacenters (Site 1 and Site2) - Same Cluster
by Rogério Ceni Coelho
Thanks Roy. I will try.
On Thu, 24 Nov 2016 at 13:01, Roy Golan <rgolan(a)redhat.com> wrote:
> Affinity labels [1] will allow you to label the hosts and VMs as site1 and
> site2, and that should be it.
>
> - create a label per site
> - add the respective label to each VM and host
>
> Unfortunately there is no UI for that, but with the SDK or REST it's easy.
>
> [1] https://www.ovirt.org/blog/2016/07/affinity-labels/
>
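A minimal sketch of the "label per site, then attach" flow against the REST API (not from the original thread; the engine URL, credentials and UUIDs are placeholders, and certificate checking is skipped for brevity):

    # create one affinity label per site
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
         -d '<affinity_label><name>site1</name></affinity_label>' \
         https://engine.example.com/ovirt-engine/api/affinitylabels

    # attach the label (id returned above) to a host and to a VM of site 1
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
         -d '<affinity_label id="LABEL_UUID"/>' \
         https://engine.example.com/ovirt-engine/api/hosts/HOST_UUID/affinitylabels
    curl -k -u admin@internal:PASSWORD -H 'Content-Type: application/xml' \
         -d '<affinity_label id="LABEL_UUID"/>' \
         https://engine.example.com/ovirt-engine/api/vms/VM_UUID/affinitylabels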
> On Nov 24, 2016 3:12 PM, "Rogério Ceni Coelho" <
> rogeriocenicoelho(a)gmail.com> wrote:
>
> Hi oVirt Jedis!
>
> First of all, congratulations on the product! I love oVirt!
>
> I am using oVirt 4.0.4 with 10 hosts and 58 virtual machines on two
> Active-Active datacenters, using two EMC Vplex + two EMC VNX5500 + ten Dell
> blade servers (eight PowerEdge M610 and two M620).
>
> Half of the servers are on Site 1 and half on Site 2, and the same goes for
> the VMs. Both sites work as one and have redundant network, storage, power, etc.
>
> I want to know the best way to specify that VM number 1 runs on Site 1
> and VM number 2 runs on Site 2.
>
> On VMware 5.1 we use DRS group manager and on Hyper-V we use custom
> properties on hosts and on VMs. What do we use on oVirt, without segregating
> into two different datacenters or two different clusters?
>
> Thanks in advance.
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
8 years, 1 month
EPEL and package(s) conflicts
by lejeczek
Hi,
I apologize if I missed this in the release/repo notes.
What are users supposed to do with the EPEL repo?
I'm asking because I hit this:

    --> Package python-perf.x86_64 0:4.8.7-1.el7.elrepo will be an update
    --> Finished Dependency Resolution
    Error: Package: nfs-ganesha-gluster-2.3.0-1.el7.x86_64 (@ovirt-4.0-centos-gluster37)
               Requires: nfs-ganesha = 2.3.0-1.el7
               Removing: nfs-ganesha-2.3.0-1.el7.x86_64 (@ovirt-4.0-centos-gluster37)
                   nfs-ganesha = 2.3.0-1.el7
               Updated By: nfs-ganesha-2.3.2-1.el7.x86_64 (epel)
                   nfs-ganesha = 2.3.2-1.el7

and I also wonder if there might be more.
Regards,
L.
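Not from the original post, but one common way to stop EPEL from offering a conflicting nfs-ganesha is to mask the package in the repo definition (a sketch; adjust to the actual EPEL repo file):

    # /etc/yum.repos.d/epel.repo -- add under the [epel] section
    exclude=nfs-ganesha*

    # or as a one-off when updating
    yum update --exclude='nfs-ganesha*'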
8 years, 1 month
ovirt 4.0.6 - live disk migration fails
by Maton, Brett
If I try to migrate a disk of a running VM to another storage domain it
fails with the following message:
Operation Cancelled
Error while executing action: User is not logged in.
Migrating disks of stopped VMs continues to work.
Probably a bug?
8 years, 1 month
Storage questions
by Oscar Segarra
Hi,
I'm planning to deploy a scalable VDI infrastructure where each physical host can run over 100 VDIs, and I'd like to deploy 10 physical hosts (1000 VDIs).
To avoid performance problems (I think replicating the changes of 1000 VDIs over the gluster network could cause them), I have thought of using local storage for the VDIs, accepting that VDIs cannot be migrated between physical hosts.
Is my worry founded in terms of performance?
Is it possible to use local SSD storage for VDIs?
I'd like to configure a gluster volume for backups on rotational disks (tiered + replica 2 + stripe 2) just to provide HA if a physical host fails.
Is it possible to use rsync for backing up the VDIs? If not, how can I sync/back up the VDIs running on local storage to the gluster shared storage? (See the sketch after this message.)
If a physical host fails, how can I start the latest backup of a VDI from the shared gluster volume?
Thanks a lot
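A minimal sketch of the rsync idea (not from the original post; the paths are hypothetical and must be adjusted to the actual local storage domain and gluster mount points):

    # hypothetical paths -- adjust to the real local storage domain and the gluster mount
    SRC=/data/local-storage/images/
    DST=/mnt/gluster-backup/$(hostname -s)/images/

    # -a keeps ownership and permissions, -H preserves hard links,
    # --sparse keeps thin-provisioned images from inflating, --delete mirrors removals
    rsync -aH --sparse --delete "$SRC" "$DST"

Note that rsyncing the disk of a running VM gives a crash-consistent copy at best; for a consistent backup the VDI has to be shut down or snapshotted first.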
8 years, 1 month