Chinese oVirt community
by Yedidyah Bar David
Hi all,
Recently, more and more questions posted to users(a)ovirt.org appear to
be the result of a Google translation from Chinese to English. These are
usually hard to understand, and I expect the replies are equally hard to
understand on the other side. A recent example is [1].
Two years ago there was a discussion [2] about this on
users(a)ovirt.org. Now, when searching, I find:
1. There seems to have been a domain ovirt-china.org , but it's dead
now (no information about it in DNS or whois). The only version of it I
can find on archive.org is [3].
2. There is a github org [4].
3. There is a google group [5] , but it requires registration to view
(and I didn't try to register).
If there is an active Chinese community of oVirt users, perhaps we
should give it more publicity, to make it easier to find. Otherwise,
perhaps we can help by creating a list on lists.ovirt.org , or
something like that.
Best regards,
[1] http://lists.ovirt.org/pipermail/users/2016-October/043732.html
[2]
http://lists.ovirt.org/pipermail/users/2014-May/024585.html
http://lists.ovirt.org/pipermail/users/2014-May/024862.html
http://lists.ovirt.org/pipermail/users/2014-May/024668.html
[3] http://web.archive.org/web/20141203033820/http://www.ovirt-china.org/
[4] https://github.com/ovirt-china
[5] https://groups.google.com/forum/#!forum/ovirt-china
--
Didi
ovirt homeserver
by david caughey
Hi,
I'm building a home server to run oVirt and wanted to get opinions on the
best approach.
The server will be used as a test and study bed for
oVirt/KVM/vCloud/OpenStack/Ceph.
The server will be based around a 10-core Xeon E5 with 128 GB of RAM.
Option 1:
Build server with CentOS 7.2 and deploy ovirt directly on top.
Option 2:
Build server with CentOS 7.2 and deploy multiple ovirt instances on top of
KVM.
Which will be the most stable and versatile method?
If a GPU is used as a passthrough device, can it be used by several VMs, or
is it restricted to one VM?
If two GPUs are used, can one be used as a dedicated passthrough to one VM
and the other shared between the remaining VMs?
Is CentOS/RHEL the best platform for oVirt?
Is it okay/advisable to load the latest kernel (4.8-ish) onto CentOS
before installing oVirt?
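For Option 2, the oVirt hosts running inside KVM would need nested
virtualization on the outer host; a quick sanity check on an Intel CPU
might look like this (a sketch only; the module parameter paths are the
standard kvm_intel ones, and the config file name is my own choice):

```shell
# Report whether the kvm_intel module allows nested guests ("Y" or "1")
cat /sys/module/kvm_intel/parameters/nested

# Enable it persistently, then reload the module (no VMs may be running)
echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
modprobe -r kvm_intel && modprobe kvm_intel
```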
Any and all comments/advice welcome,
David
100% disk utilization on Hosted Engine
by Anantha Raghava
Hi,
I have hit a unique problem. I have been using oVirt 4.0.2.7-1.el7.centos
for about a month, and the Engine is now reporting that the disk space
allocated to the Hosted Engine is full; it will not even let me log in to
the engine's console to clear some files. When I try to add additional
storage to the Hosted Engine, the engine admin portal reports that, as the
Engine VM is not managed by the oVirt Engine, it cannot extend the storage.
Now the questions are:
1. Is enabling advanced DWH on the engine host the root cause of 100 GB of
disk filling up in about one month?
2. How do we clear some unwanted files from the disk and extend the space?
3. How do we reset the DWH database to basic?
Note: CentOS 7.2 is not even allowing us to log in to the console to take
any action. Booting the Hosted Engine VM with a Live CD etc. looks like it
is ruled out in this case.
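If console or SSH access to the engine VM can be regained, a first step
might be to find out what is actually eating the space; a sketch, where
engine.example.com is a placeholder for the engine FQDN (the DWH history
database name, ovirt_engine_history, is the standard one):

```shell
# Find the largest directories on the engine VM's filesystem
ssh root@engine.example.com "du -xh --max-depth=2 /var | sort -h | tail -n 15"

# The DWH history database is a common culprit; check its on-disk size
ssh root@engine.example.com \
  "sudo -u postgres psql -tc \"SELECT pg_size_pretty(pg_database_size('ovirt_engine_history'))\""
```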
--
Thanks & Regards,
Anantha Raghava
eXza Technology Consulting & Services
Do not print this e-mail unless required. Save Paper & trees.
when engine broken
by 张 余歌
Recently I have been confused about how I can recover the engine once the
engine process has failed or broken for some reason.

Environment: all-in-one install, oVirt 3.5.x.
The engine is installed on the local host without any protection. I can
take engine backups using the command-line tool.

I have looked at some disaster-recovery procedures to recover the engine,
but it seems my setup is wrong or there is some other issue. Could you give
me some way to recover the engine and find my lost VMs??
Thanks a lot

Get Outlook for Android <https://aka.ms/ghei36>
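For this "engine broken" scenario, the supported path is the engine-backup
tool mentioned in the message; a minimal backup/restore sketch (file names
are examples, and option names are as in recent oVirt releases, so verify
them against your 3.5.x install):

```shell
# On the (still working) engine: take a full backup
engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log

# After reinstalling the OS and the ovirt-engine packages on a new host,
# restore the backup and provision a fresh database from it
engine-backup --mode=restore --file=engine-backup.tar.gz --log=restore.log \
  --provision-db --restore-permissions
```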
Fwd: Re: Local and Shared storage in same datacenter
by Mike
Hi Barak,
Op 30-10-2016 om 11:19 schreef Barak Korren:
>> While I understand that this makes having local storage impossible, I
>> believe there is a use case to have local storage in a shared storage
>> datacenter.
>> Consider the following:
>> I have a few applications that require 1 millisecond latency, 2
>> milliseconds at most.
>> That is not consistently achievable with shared storage, so to that end
>> I added flash storage to a few hypervisors.
>> About 5% of my servers require this and are not that resource hungry to
>> require a dedicated physical server.
>> That same 5% also has no requirement to be migrated if a host fails.
>>
>
> I mentioned this in another thread already, please look into the VDSM
> scratchpad hook. It will allow you to attach (files on) the local SSDs
> (as disks) to VMs when those VMs start up. It will perfectly meet your
> use case if you only need to keep the data on the SSD while the VM is
> up.
>
>
The data should not be lost if we power down the VM.
If the data is lost during a host failure we have options to restore the
data, but do not want to do it on a weekly/monthly basis.
The scratchpad is useful for other options we are exploring, thanks!
Persist files node-ng
by Jonas Israelsson
How do I persist files on a node-ng host?
I forgot to add some needed network configuration during install. While
these manually added config files seem to survive a reboot, updating the
node via the web UI will wipe them.
TIA
Jonas
gluster setup in ovirt
by Thing
Hi,
I have 3 machines imported into oVirt 4.0.4 just to do storage. I have no
storage set up yet. I am a bit confused: can I add new storage from scratch
via oVirt, i.e. picking mount points (I have /gv1 set up on each), or do I
create the Gluster replicated setup manually on each of the three nodes
first and import it "ready made"?
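If it comes to the manual route, a 3-way replicated volume from the CLI
might look like this (hostnames, volume name, and brick paths are examples;
a sketch only, run from one already-probed peer):

```shell
# Create and start a replica-3 volume spanning the three nodes
gluster volume create gv1 replica 3 \
  glusterp1:/gv1/brick1 glusterp2:/gv1/brick1 glusterp3:/gv1/brick1
gluster volume start gv1
gluster volume info gv1
```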
messed up gluster attempt
by Thing
Hi,
So I was trying to make a 3-way mirror and it reported a failure. Now I get
these messages:
On glusterp1,
=========
[root@glusterp1 ~]# gluster peer status
Number of Peers: 1
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
[root@glusterp1 ~]# gluster peer probe glusterp3.graywitch.co.nz
peer probe: failed: glusterp3.graywitch.co.nz is either already part of
another cluster or having volumes configured
[root@glusterp1 ~]# gluster volume info
No volumes present
[root@glusterp1 ~]#
=========
on glusterp2,
=========
[root@glusterp2 ~]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
Active: active (running) since Fri 2016-10-28 15:22:34 NZDT; 5min ago
Main PID: 16779 (glusterd)
CGroup: /system.slice/glusterd.service
└─16779 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
INFO
Oct 28 15:22:32 glusterp2.graywitch.co.nz systemd[1]: Starting GlusterFS, a
clustered file-system server...
Oct 28 15:22:34 glusterp2.graywitch.co.nz systemd[1]: Started GlusterFS, a
clustered file-system server.
[root@glusterp2 ~]# gluster volume info
No volumes present
[root@glusterp2 ~]# gluster peer status
Number of Peers: 2
Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Sent and Received peer request (Connected)
Hostname: 192.168.1.31
Uuid: a29a93ee-e03a-46b0-a168-4d5e224d5f02
State: Peer in Cluster (Connected)
[root@glusterp2 ~]#
==========
on glusterp3,
==========
[root@glusterp3 glusterd]# systemctl status glusterd.service
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled;
vendor preset: disabled)
Active: active (running) since Fri 2016-10-28 15:26:40 NZDT; 1min 16s ago
Main PID: 7033 (glusterd)
CGroup: /system.slice/glusterd.service
└─7033 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level
INFO
Oct 28 15:26:37 glusterp3.graywitch.co.nz systemd[1]: Starting GlusterFS, a
clustered file-system server...
Oct 28 15:26:40 glusterp3.graywitch.co.nz systemd[1]: Started GlusterFS, a
clustered file-system server.
[root@glusterp3 glusterd]# gluster volume info
No volumes present
[root@glusterp3 glusterd]# gluster peer probe glusterp1.graywitch.co.nz
peer probe: failed: glusterp1.graywitch.co.nz is either already part of
another cluster or having volumes configured
[root@glusterp3 glusterd]# gluster volume info
No volumes present
[root@glusterp3 glusterd]# gluster peer status
Number of Peers: 1
Hostname: glusterp2.graywitch.co.nz
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Sent and Received peer request (Connected)
[root@glusterp3 glusterd]#
===========
How do I clean this mess up?
thanks
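One way out, while the peers hold no volumes or data yet, is to reset
glusterd's state on each node and re-probe from scratch; a sketch
(destructive: it wipes peer and volume metadata, using glusterd's default
state directory, so verify the path on your install first):

```shell
# On each node: stop glusterd and clear its peer/volume state
systemctl stop glusterd
rm -rf /var/lib/glusterd/peers/* /var/lib/glusterd/vols/*
systemctl start glusterd

# Then, from one node only, probe the other two again
gluster peer probe glusterp2.graywitch.co.nz
gluster peer probe glusterp3.graywitch.co.nz
gluster peer status
```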
gluster how to setup a volume across 3 nodes via ovirt
by Thing
Hi,
I have 3 gluster nodes running,
======
[root@glusterp1 ~]# gluster peer status
Number of Peers: 2
Hostname: 192.168.1.33
Uuid: 0fde5a5b-6254-4931-b704-40a88d4e89ce
State: Peer in Cluster (Connected)
Hostname: 192.168.1.32
Uuid: ef780f56-267f-4a6d-8412-4f1bb31fd3ac
State: Peer in Cluster (Connected)
======
I have a 900 GB partition on each of the three nodes ready to go, formatted
with XFS. However, when I go into Host -> Storage Devices, it says
gv_1-lvgv1 is already in use and "Create Brick" is greyed out.
So how do I get "Create Brick" un-greyed?
The partition isn't mounted, just set up and formatted with XFS, ready for
use.
Or am I better off setting it up via the CLI on glusterp1? I assume I can
then import it into oVirt for use?
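If the device is flagged "in use" because of leftover LVM or filesystem
signatures, one possible cleanup before retrying "Create Brick" is to wipe
and re-make the filesystem (the device name is an example, and this
destroys any data on it):

```shell
# Show existing signatures on the device, then wipe them all
wipefs /dev/sdb1
wipefs -a /dev/sdb1

# Re-create XFS with the 512-byte inode size commonly recommended for Gluster
mkfs.xfs -f -i size=512 /dev/sdb1
```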