Host installation failed. Unexpected connection termination.
by Punit Dambiwal
Hi,
I have successfully installed oVirt Engine 3.4.3, but when I try to add a
host to the cluster it fails with the following error:
Host compute1 installation failed. Unexpected connection termination.
Engine Log :-
----------------
2014-07-26 17:18:37,140 INFO [org.ovirt.engine.core.bll.InstallerMessages]
(org.ovirt.thread.pool-6-thread-20) [40582e53] Installation 43.252.176.13:
Connected to host 43.252.176.13 with SSH key fingerprint:
1e:38:88:c3:20:0f:cb:08:6c:ae:cb:87:12:c1:01:50
2014-07-26 17:18:37,165 INFO
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(org.ovirt.thread.pool-6-thread-20) [40582e53] Correlation ID: 40582e53,
Call Stack: null, Custom Event ID: -1, Message: Installing Host compute1.
Connected to host 43.252.176.13 with SSH key fingerprint:
1e:38:88:c3:20:0f:cb:08:6c:ae:cb:87:12:c1:01:50.
2014-07-26 17:18:37,194 INFO [org.ovirt.engine.core.bll.VdsDeploy]
(org.ovirt.thread.pool-6-thread-20) [40582e53] Installation of
43.252.176.13. Executing command via SSH umask 0077; MYTMP="$(mktemp -t
ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null 2>&1; rm
-fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm -fr "${MYTMP}" && mkdir "${MYTMP}"
&& tar --warning=no-timestamp -C "${MYTMP}" -x && "${MYTMP}"/setup
DIALOG/dialect=str:machine DIALOG/customization=bool:True <
/var/cache/ovirt-engine/ovirt-host-deploy.tar
2014-07-26 17:18:37,201 INFO [org.ovirt.engine.core.utils.ssh.SSHDialog]
(org.ovirt.thread.pool-6-thread-20) SSH execute root(a)43.252.176.13 'umask
0077; MYTMP="$(mktemp -t ovirt-XXXXXXXXXX)"; trap "chmod -R u+rwX
\"${MYTMP}\" > /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; rm
-fr "${MYTMP}" && mkdir "${MYTMP}" && tar --warning=no-timestamp -C
"${MYTMP}" -x && "${MYTMP}"/setup DIALOG/dialect=str:machine
DIALOG/customization=bool:True'
2014-07-26 17:18:39,871 INFO [org.ovirt.engine.core.vdsbroker.VdsManager]
(DefaultQuartzScheduler_Worker-33) Initializing Host: compute1
2014-07-26 17:19:11,798 INFO
[org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService]
(DefaultQuartzScheduler_Worker-18) No up server in cluster
2014-07-26 17:19:11,799 ERROR
[org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob]
(DefaultQuartzScheduler_Worker-18) Error updating tasks from CLI:
org.ovirt.engine.core.common.errors.VdcBLLException: VdcBLLException:
NO_UP_SERVER_FOUND (Failed with error NO_UP_SERVER_FOUND and code 7000)
at
org.ovirt.engine.core.bll.gluster.tasks.GlusterTasksService.getTaskListForCluster(GlusterTasksService.java:30)
[bll.jar:]
at
org.ovirt.engine.core.bll.gluster.GlusterTasksSyncJob.updateGlusterAsyncTasks(GlusterTasksSyncJob.java:84)
[bll.jar:]
at sun.reflect.GeneratedMethodAccessor41.invoke(Unknown Source)
[:1.7.0_65]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
[rt.jar:1.7.0_65]
at java.lang.reflect.Method.invoke(Method.java:606)
[rt.jar:1.7.0_65]
at
org.ovirt.engine.core.utils.timer.JobWrapper.execute(JobWrapper.java:60)
[scheduler.jar:]
at org.quartz.core.JobRunShell.run(JobRunShell.java:213)
[quartz.jar:]
at
org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:557)
[quartz.jar:]
------------------
Thanks,
Punit
RHEV - Hypervisor directory structure.
by santosh
Hi,
I am trying to understand the directory structure on the RHEV hypervisor.
Below is part of the directory tree on the hypervisor.
[root@XYZ dom_md] tree /rhev/data-center/51a24440-6a1f-48f0-8306-92455fe7aaa1/mastersd/
/rhev/data-center/51a24440-6a1f-48f0-8306-92455fe7aaa1/mastersd/
├── dom_md
│   ├── ids -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/ids
│   ├── inbox -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/inbox
│   ├── leases -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/leases
│   ├── master -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/master
│   ├── metadata -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/metadata
│   └── outbox -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/outbox
├── images
│   ├── 7f0be608-0251-4125-a3a1-b4e74bbcaa34
│   │   └── 53596e07-0317-43b3-838a-13cde56ce1c8 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/53596e07-0317-43b3-838a-13cde56ce1c8
│   ├── aa6b4787-271f-4651-98c8-97054ff4418d
│   │   ├── 22961431-c139-4311-bc78-c4f5a58cfda7 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/22961431-c139-4311-bc78-c4f5a58cfda7
│   │   └── f5f5f1ff-af71-4d11-a15a-dbc863e5d6f7 -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/f5f5f1ff-af71-4d11-a15a-dbc863e5d6f7
│   └── e4f70c9e-c5b3-4cbe-a755-684d6a86026f
│       └── 8c2f5f05-c109-45e7-af98-c54437ad5d9e -> /dev/f86ee04c-d84d-4032-a3f8-c77ed1ad29ea/8c2f5f05-c109-45e7-af98-c54437ad5d9e
└── master
    ├── lost+found
    ├── tasks
    └── vms
        ├── 1ba7645c-79db-403a-95f6-b3078e441b86
        │   └── 1ba7645c-79db-403a-95f6-b3078e441b86.ovf
        └── 6df2c080-a0d5-4202-8f09-ed719184f667
            └── 6df2c080-a0d5-4202-8f09-ed719184f667.ovf
I am trying to understand the *ids, inbox, leases, master, metadata and
outbox* device files above.
I would appreciate any pointers to this information.
Thanks.
Glusterfs HA with Ovirt
by Punit Dambiwal
Hi,
I have some HA-related concerns about GlusterFS with oVirt. Let's say I have
4 storage nodes with gluster bricks as below:
1. 10.10.10.1 to 10.10.10.4 with 2 bricks each, and I have a distributed
replicated architecture...
2. Now I attached this gluster storage to ovirt-engine with the following
mount point: 10.10.10.2/vol1
3. In my cluster I have 3 hypervisor hosts (10.10.10.5 to 10.10.10.7); the SPM
is on 10.10.10.5...
4. What happens if 10.10.10.2 goes down? Can the hypervisor hosts still
access the storage?
5. What happens if the SPM goes down?
Note: what happens for points 4 & 5 if storage and compute are both running
on the same server?
Thanks,
Punit
Documentation of cloud-init
by Amedeo Salvati
Hi guys,

after some headache I was able to use cloud-init via the python-sdk (thanks to
Juan), and I hope no one will fight with it anymore :D, so I think it's better
to document its use with a simple example on the web page available at:

http://www.ovirt.org/Features/Cloud-Init_Integration

Below are the simple changes that you can integrate on the web page:

- fix the API design example of usage for the files xml; on the web page you can find:

...
      <files>
        <file>
          <name>/tmp/testFile1.txt</name>
          <content>temp content</content>
          <type>PLAINTEXT</type>
        </file>
      </files>
...

but on params.File there isn't any "type" parameter, only "type_", so you can
change the xml to:

...
      <files>
        <file>
          <name>/tmp/testFile1.txt</name>
          <content>temp content</content>
          <type_>PLAINTEXT</type_>
        </file>
      </files>
...

- insert an example of using cloud-init via the python-sdk (I hope the java-sdk
doesn't have big differences).

For this you can insert on the web page an example of using cloud-init to set
the hostname, reset the root password and write a simple text file; the simple
python code is:

...
scontent = "write_files:\n-   content: |\n        #simple file\n        PIPPO=\"ciao\"\n    path: /etc/pippo.txt"
action = params.Action(
  vm=params.VM(
    initialization=params.Initialization(
      cloud_init=params.CloudInit(
        host=params.Host(address="rheltest029"),
        users=params.Users(
          user=[params.User(user_name="root", password="pippolo")]
        ),
        files=params.Files(
          file=[params.File(name="/etc/pippo.txt", content=scontent, type_="PLAINTEXT")]
        )
      )
    )
  )
)
vm.start( action )
...

HTH
Amedeo Salvati
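A minimal sketch of the surrounding python-sdk boilerplate needed to run the
snippet above, assuming a placeholder engine URL, credentials and an existing
VM named "myvm" (none of these come from the original message):

...
# Minimal sketch; the engine URL, credentials and VM name are placeholders.
from ovirtsdk.api import API
from ovirtsdk.xml import params

# Connect to the engine REST API (oVirt 3.x python-sdk).
api = API(url="https://engine.example.com/api",
          username="admin@internal",
          password="engine-password",
          insecure=True)  # skip certificate verification for this example only

# Look up an existing, powered-off VM by name.
vm = api.vms.get(name="myvm")

# cloud-init payload: a write_files fragment, as in the example above.
scontent = ("write_files:\n"
            "-   content: |\n"
            "        #simple file\n"
            "        PIPPO=\"ciao\"\n"
            "    path: /etc/pippo.txt")

# Build the start action carrying the cloud-init settings for this boot.
action = params.Action(
    vm=params.VM(
        initialization=params.Initialization(
            cloud_init=params.CloudInit(
                host=params.Host(address="rheltest029"),
                users=params.Users(
                    user=[params.User(user_name="root", password="pippolo")]),
                files=params.Files(
                    file=[params.File(name="/etc/pippo.txt",
                                      content=scontent,
                                      type_="PLAINTEXT")])))))

# Start the VM with the cloud-init payload applied.
vm.start(action)
api.disconnect()
...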
[ANN] oVirt 3.5.0 Second Beta is now available for testing
by Sandro Bonazzola
The oVirt team is pleased to announce that the 3.5.0 Second Beta is now
available for testing as of Jul 21st 2014.
The beta is available now for Fedora 19, Fedora 20 and Red Hat Enterprise Linux 6.5
(or similar).
Feel free to join us in testing it on the second test day, Tue Jul 29th!
This release of oVirt includes numerous bug fixes.
See the release notes [1] for a list of the new features and bugs fixed.
The existing repository ovirt-3.5-pre has been updated for delivering this
release without the need to enable any other repository.
Please refer to release notes [1] for Installation / Upgrade instructions.
New oVirt Live, oVirt Guest Tools and oVirt Node ISO will be available soon as well[2].
Please note that mirrors may need a couple of days before being synchronized.
If you want to be sure to use the latest rpms and don't want to wait for the mirrors,
you can edit /etc/yum.repos.d/ovirt-3.5.repo, commenting out the mirror line and
uncommenting the baseurl line.
[1] http://www.ovirt.org/OVirt_3.5_Release_Notes
[2] http://resources.ovirt.org/pub/ovirt-3.5-pre/iso/
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Weird yum dependency problem upgrading host to 3.4.3
by Daniel Helgenberger
Hello,
I am in the process of updating my 2-node HE cluster to oVirt 3.4.3.
The engine and node one are already updated without any problems (great
work on fixing the ha-agent subsystem locked issue!).
Now, yum updating node two - immediately after node one - gives:
Error: Package: ovirt-hosted-engine-ha-1.2.1-0.2.master.20140723134021.el6.noarch (ovirt-3.4-stable)
           Requires: vdsm-python >= 4.16.0
           Removing: vdsm-python-4.14.9-0.el6.x86_64 (@ovirt-3.4-stable)
               vdsm-python = 4.14.9-0.el6
           Updated By: vdsm-python-4.14.11.2-0.el6.x86_64 (ovirt-3.4-stable)
               vdsm-python = 4.14.11.2-0.el6
           Available: vdsm-python-4.12.1-2.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.12.1-2.el6
           Available: vdsm-python-4.12.1-4.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.12.1-4.el6
           Available: vdsm-python-4.13.0-9.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.0-9.el6
           Available: vdsm-python-4.13.0-11.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.0-11.el6
           Available: vdsm-python-4.13.2-1.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.2-1.el6
           Available: vdsm-python-4.13.3-2.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.3-2.el6
           Available: vdsm-python-4.13.3-3.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.3-3.el6
           Available: vdsm-python-4.13.3-4.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.3-4.el6
           Available: vdsm-python-4.13.4-0.el6.i686 (ovirt-3.3-stable)
               vdsm-python = 4.13.4-0.el6
           Available: vdsm-python-4.14.6-0.el6.i686 (ovirt-3.4-stable)
               vdsm-python = 4.14.6-0.el6
           Available: vdsm-python-4.14.8.1-0.el6.i686 (ovirt-3.4-stable)
               vdsm-python = 4.14.8.1-0.el6
           Available: vdsm-python-4.14.11-0.el6.i686 (ovirt-3.4-stable)
               vdsm-python = 4.14.11-0.el6
Error: Package: ovirt-hosted-engine-ha-1.2.1-0.2.master.20140723134021.el6.noarch (ovirt-3.4-stable)
           Requires: vdsm-cli >= 4.16.0
           Removing: vdsm-cli-4.14.9-0.el6.noarch (@ovirt-3.4-stable)
               vdsm-cli = 4.14.9-0.el6
           Updated By: vdsm-cli-4.14.11.2-0.el6.noarch (ovirt-3.4-stable)
               vdsm-cli = 4.14.11.2-0.el6
           Available: vdsm-cli-4.12.0-1.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.12.0-1.el6
           Available: vdsm-cli-4.12.1-2.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.12.1-2.el6
           Available: vdsm-cli-4.12.1-4.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.12.1-4.el6
           Available: vdsm-cli-4.13.0-9.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.0-9.el6
           Available: vdsm-cli-4.13.0-11.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.0-11.el6
           Available: vdsm-cli-4.13.2-1.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.2-1.el6
           Available: vdsm-cli-4.13.3-2.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.3-2.el6
           Available: vdsm-cli-4.13.3-3.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.3-3.el6
           Available: vdsm-cli-4.13.3-4.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.3-4.el6
           Available: vdsm-cli-4.13.4-0.el6.noarch (ovirt-3.3-stable)
               vdsm-cli = 4.13.4-0.el6
           Available: vdsm-cli-4.14.6-0.el6.noarch (ovirt-3.4-stable)
               vdsm-cli = 4.14.6-0.el6
           Available: vdsm-cli-4.14.8.1-0.el6.noarch (ovirt-3.4-stable)
               vdsm-cli = 4.14.8.1-0.el6
           Available: vdsm-cli-4.14.11-0.el6.noarch (ovirt-3.4-stable)
               vdsm-cli = 4.14.11-0.el6
Error: Package: ovirt-hosted-engine-ha-1.2.1-0.2.master.20140723134021.el6.noarch (ovirt-3.4-stable)
           Requires: vdsm >= 4.16.0
           Removing: vdsm-4.14.9-0.el6.x86_64 (@ovirt-3.4-stable)
               vdsm = 4.14.9-0.el6
           Updated By: vdsm-4.14.11.2-0.el6.x86_64 (ovirt-3.4-stable)
               vdsm = 4.14.11.2-0.el6
           Available: vdsm-4.12.1-2.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.12.1-2.el6
           Available: vdsm-4.12.1-4.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.12.1-4.el6
           Available: vdsm-4.13.0-9.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.0-9.el6
           Available: vdsm-4.13.0-11.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.0-11.el6
           Available: vdsm-4.13.2-1.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.2-1.el6
           Available: vdsm-4.13.3-2.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.3-2.el6
           Available: vdsm-4.13.3-3.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.3-3.el6
           Available: vdsm-4.13.3-4.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.3-4.el6
           Available: vdsm-4.13.4-0.el6.i686 (ovirt-3.3-stable)
               vdsm = 4.13.4-0.el6
           Available: vdsm-4.14.6-0.el6.i686 (ovirt-3.4-stable)
               vdsm = 4.14.6-0.el6
           Available: vdsm-4.14.8.1-0.el6.i686 (ovirt-3.4-stable)
               vdsm = 4.14.8.1-0.el6
           Available: vdsm-4.14.11-0.el6.i686 (ovirt-3.4-stable)
               vdsm = 4.14.11-0.el6
You could try using --skip-broken to work around the problem
You could try running: rpm -Va --nofiles --nodigest
The other host upgraded to vdsm 4.14.11 ....
Thanks,
Daniel
--
Daniel Helgenberger
m box bewegtbild GmbH
P: +49/30/2408781-22
F: +49/30/2408781-10
ACKERSTR. 19
D-10115 BERLIN
www.m-box.de www.monkeymen.tv
Geschäftsführer: Martin Retschitzegger / Michaela Göllner
Handelsregister: Amtsgericht Charlottenburg / HRB 112767
Migration of CentOS 6.5 VMs causes kernel panic after upgrading to oVirt 3.4.3
by s k
Hello,

I upgraded oVirt yesterday from 3.4.2 to 3.4.3 and performed a yum upgrade
on all oVirt nodes as well, without rebooting any of them.

After completing the upgrade, I tried to perform a manual migration of VMs
to a different host. The result was that all Windows 2003/2008 and CentOS
5 VMs were migrated successfully without any issues, but all CentOS 6.5 VMs
crashed although migration was completed successfully.

Looking at the console of the CentOS 6.5 VMs, I noticed that a few seconds
after migration was complete, all the processes were killed, many out-of-memory
errors were thrown as well as messages like "virtio_balloon virtio3: Out of
puff! Can't get 256 pages", and the VM ended up in kernel panic, so I had to
perform a manual poweroff and start the VM again.

Any ideas of what might be the cause? I have opened an urgent bug for this
(https://bugzilla.redhat.com/show_bug.cgi?id=1123274)

Thank you,

Sokratis
Call for Papers Reminder: KVM Forum
by Brian Proffitt
This is a reminder to everyone that the CFP for KVM Forum is coming up very soon. All submissions must be received[1] before midnight (PDT) on July 27, 2014.
This event will be an excellent opportunity to show off the new work being done around oVirt, and will also host a new oVirt Workshop.
Get your submissions in as soon as you can!
[1] http://events.linuxfoundation.org/events/kvm-forum/program/cfp
BKP
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
Disk migration eats all CPU, vms running in SPM become unresponsive
by Federico Alberto Sayd
Hello:
I am experiencing some troubles with ovirt nodes:
When a node is selected as SPM and I move a disk between storage
domains, it seems that migration process eats all CPU and some VMs
(running on the SPM) hang, others lose network connectivity. The events
tab at Ovirt Engine reports the CPU exceeding the defined threshold and
then reports that VMs in such host (SPM) are not responding.
How can I debug this? Why do the VMs become unresponsive or lost network
connectivity when the host CPU goes too high?
I have attached a screenshot of the ovirt-engine events, and the
relevant engine.log
My setup:
oVirt Engine Version:
3.4.0-1.el6 (Centos 6.5)
Nodes:
Centos 6.5
vdsm-4.14.6-0.el6
libvirt-0.10.2-29.el6_5.9
KVM: 0.12.1.2 - 2.415.el6_5.10 (jenkins build)
Regards
Federico
planning ovirt for production
by Demeter Tibor
Hi,
We have a production environment with KVM+centos6 and we want to switch to ovirt.
At this moment we have 12 VMs on three independent servers.
These VMs use the local disks of the servers; we don't have central storage.
Currently we have for oVirt:
- Two Dell R710 with 128 gigs of ram.
- A third dell server for ovirt-engine.
- four 1 gb/sec NICs/ server.
- Smart GB switch
We would like to build an oVirt environment with:
- clusterized, redundant filesystem, data loss protection
- If a host goes down, the VMs can be restarted on the remaining host
- LACP/bonding (mode 6) for fast I/O between gluster hosts; we don't have 10 GbE NICs
- 8 TB of disk capacity for VMs
We don't want:
- to use hw RAID on the servers, because we need free disk trays for more capacity
My questions:
- Which glusterfs method is the best for us for performance?
- Can I get real disk I/O performance out of the four 1 Gb NICs per server, or do I need 10 GbE NICs for this?
- How many disks do we need for good redundancy/performance? 4/server or 2/server?
- What will be the weak point of our project?
Thanks in advance.
Tibor