Re: [ovirt-users] Migration from Proxmox 3.x to Ovirt
by Myles Wakeham
Nicolas writes:
> I would be glad to hear about the way to do a 'one step VM migration'
> between two oVirt datacenters...
Hmmm... maybe I'm making an assumption here about a feature that doesn't exist. In Proxmox, once you have defined a 'cluster' of hypervisors and they achieve quorum (i.e., they can all see each other), you can select a single VM and choose 'Migrate' to another hypervisor right from the web interface. Proxmox then takes a snapshot of the VM, moves it to the target hypervisor, and brings it up there.
Is that not possible with oVirt?
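For reference, the same move can be driven from a Proxmox node's shell; a minimal sketch for a KVM guest, where the VM ID (100) and the target node name (pve2) are example values, not from this thread:

    # live-migrate VM 100 to node pve2; --online keeps the guest running
    qm migrate 100 pve2 --online
    # OpenVZ containers have their own tool, e.g. vzmigrate <target> <ctid>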
Myles
=================
Myles Wakeham
Chief Technology Officer
Edgeneering LLC
http://www.edgeneering.com
Ph: +1-480-553-8940
Fax: +1-480-452-1979
Migration from Proxmox 3.x to Ovirt
by Myles Wakeham
We are considering migrating a number of hypervisors from Proxmox 3.x to oVirt, and I'm reaching out to see if anyone here has gone through this process and has some war stories to share.
The bulk of our VMs are OpenVZ containers running Linux, but we have a handful of KVM guests running Windows Server 2008, which use the virtio drivers. The biggest issue for us is Proxmox's ridiculously complex clustering model: we have multiple data centers with colocated servers in racks, some of which do not allow multicast between them, forcing us to rework the VPNs between the servers. Our goal is a 'one step migration' capability for VMs between data centers, which has been a major effort to set up with Proxmox since v2.
If oVirt can help us achieve this, I'm all ears as I think we are ready to make this migration happen.
Thanks in advance for any suggestions or comments.
Myles
=================
Myles Wakeham
Chief Technology Officer
Edgeneering LLC
http://www.edgeneering.com
Ph: +1-480-553-8940
Fax: +1-480-452-1979
Two new plugins for oVirt
by Lucas Vandroux
Dear all,

We developed 2 new plugins for the oVirt-Engine.

The first one is to interact with the engine-manage-domains tool directly from WebAdmin: https://github.com/ovirt-china/manage-domains-plugin

The second one is to schedule automatic backups of your VMs: https://github.com/ovirt-china/vm-backup-scheduler

Maybe they can help you.

Best regards,

Lucas Vandroux for the oVirt-China Team (http://ovirt-china.org/)
SPM host and snapshot deletion
by Demeter Tibor
Hi,
I have an oVirt 3.5 setup with GlusterFS and three nodes: CentOS 6.5 and GlusterFS 3.5.2.
When I delete a snapshot of a stopped VM, the SPM host eats all of its virtual memory, and this kills all of the VMs running on the SPM host.
It is a very big problem because I need to delete a lot of snapshots.
In this case I need to power off these VMs because there is no other way to stop them.
I've tried live migration, but in this case live migration does not work either.
Is this a known bug?
Thanks in advance.
Tibor
oVirt bonding mode4 + cisco 2960 XR
by Алексей Николаев
Hi, community!
I have made a bond0 in mode 4 (eth0+eth1+eth2+eth3) through the oVirt portal. It works
well on a CentOS 7 node.
How can I set up my Cisco 2960-XR switch to work with this bond0 for
load balancing + aggregation (802.3ad)?
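A minimal IOS sketch of the switch side, assuming the four host NICs land on ports Gi1/0/1-4 and an access VLAN of 10 (the port range, channel-group number, and VLAN are all assumptions, not from the original post):

    ! Bundle the four ports into one LACP (802.3ad) port-channel,
    ! the counterpart of Linux bond mode 4 on the oVirt node.
    configure terminal
    interface range GigabitEthernet1/0/1 - 4
     channel-group 1 mode active      ! "active" = LACP
    exit
    interface Port-channel1
     switchport mode access
     switchport access vlan 10
    exit
    ! choose a load-balance hash that matches your traffic mix
    port-channel load-balance src-dst-ip
    end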
Centos 7 guest on rhev 3.4
by Jakub Bittner
Hello,
we are running RHEV 3.4 and we installed CentOS 7.0 as a guest with the KDE
GUI. When I connect to this guest via SPICE I can see the desktop, but it
doesn't resize; I have to change the resolution manually inside the guest (CentOS
7). I have the ovirt-guest-agent and vdagent installed and running in the guest. We
are connecting from CentOS 7. (Resizing a CentOS 6 guest works as expected.)
In the CentOS 7 guest we have:
spice-server-0.12.4-5.el7_0.1.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-glib-0.20-8.el7.x86_64
spice-xpi-2.8-5.el7.x86_64
spice-vdagent-0.14.0-7.el7.x86_64
ovirt-guest-agent-common-1.0.10-2.el7.noarch
On the CentOS 7 client from which we connect:
virt-viewer-0.5.7-7.el7.x86_64
spice-gtk3-0.20-8.el7.x86_64
spice-vdagent-0.14.0-7.el7.x86_64
spice-xpi-2.8-5.el7.x86_64
spice-glib-0.20-8.el7.x86_64
Maybe I should install some package into the guest, but I don't know which.
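Two zero-cost checks worth running first, assuming a systemd-based guest (CentOS 7 is) and a VM defined with a SPICE agent channel:

    # inside the guest: is the agent daemon actually running?
    systemctl status spice-vdagentd
    # is the virtio serial channel the agent needs present?
    ls -l /dev/virtio-ports/com.redhat.spice.0

If the channel device is missing, the VM definition lacks the agent channel and no guest package will fix it.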
Thanks for the help.
Replace Failed Master Data Domain
by Jerry Champlin
List:
What is the process for replacing a master data domain? The storage
attached to it was corrupted and has been replaced. We need to get the
data domain back. Any pointers greatly appreciated.
-Jerry
Jerry Champlin
Absolute Performance Inc.
Phone: 303-565-4401
--
Enabling businesses to deliver critical applications at lower cost and
higher value to their customers.
Change host hostname/ip
by Zenon D'Elee
Hello,
I have a hosted oVirt infrastructure with only one host and one storage domain. I don't
know how to change the host's IP address in the oVirt web manager (the field is greyed out). Do
I have to use the 'ovirt-engine-rename' command-line tool on the engine VM? Do I
have to put the host in maintenance first?
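A cautious sketch of the first steps on a hosted-engine 3.x host before touching any addresses (check the exact flags against your version's documentation):

    # stop the HA agents from reacting while you work
    hosted-engine --set-maintenance --mode=global
    # confirm the current state
    hosted-engine --vm-status

Note that ovirt-engine-rename changes the engine's FQDN, not a host's address, so it is probably not the tool for this.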
Thank you for helping.
Re: [ovirt-users] Strange error messages
by Timothy Asir Jeyasingh
oVirt stores a master copy of each Gluster hook file the first time it
discovers one under /var/lib/glusterd/hooks/1 on a node. It then keeps
checking the hook files on every node against that master copy and
raises a conflict message whenever it finds a difference.
To resolve such a conflict, the oVirt UI provides a Resolve option, with
which you can either copy the master hook file to every node, or select
one particular host's hook file to be copied to all nodes and update
oVirt's master copy.
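To see such a difference by hand before resolving it, a sketch assuming passwordless ssh to each node (host names are examples):

    # compare checksums of a conflicting hook script on every node
    for h in node1 node2 node3; do
        echo "== $h =="
        ssh "$h" 'md5sum /var/lib/glusterd/hooks/1/set/post/*samba-set.sh'
    done

The node whose checksum differs from the others is the one oVirt is flagging.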
Regards,
Timothy
----- Original Message -----
> -------- Original Message --------
> Subject: Re: [ovirt-users] Strange error messages
> Date: Mon, 17 Nov 2014 18:21:20 +0530
> From: knarra <knarra(a)redhat.com>
> To: Demeter Tibor <tdemeter(a)itsmart.hu>, "users(a)ovirt.org List" <users(a)ovirt.org>
>
> On 11/17/2014 06:14 PM, Demeter Tibor wrote:
> > Hi,
> > Meanwhile this is happening every two hours,
> > for example 09:21, 11:21, 13:21.
> > Can anybody help me?
> > Thanks,
> > Tibor
>
> This happens because the interval for syncing the hooks from the nodes to the
> engine has been configured as two hours.
>
> > ----- Original Message -----
> > > Hi,
> > >
> > > This morning I got a lot of similar messages on the console:
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook set-POST-30samba-set.sh of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook stop-PRE-29CTDB-teardown.sh of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook add-brick-PRE-28Quota-enable-root-xattr-heal.sh
> > > of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook set-POST-31ganesha-set.sh of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook start-POST-30samba-start.sh of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook reset-POST-31ganesha-reset.sh of Cluster r710cluster1.
> > >
> > > 2014-Nov-17, 03:21
> > > Detected conflict in hook gsync-create-POST-56glusterd-geo-rep-create-post.sh
> > > of Cluster r710cluster1.
> > >
> > > What does this mean?
> > > The system seems to be working.
> > >
> > > Thanks:
> > > Tibor