Re: [Users] Gluster storage
by Gianluca Cecchi
On Fri, Nov 29, 2013 at 11:48 AM, Dan Kenigsberg wrote:
> On Fri, Nov 29, 2013 at 04:04:03PM +0530, Vijay Bellur wrote:
>
>>
>> There are two ways in which GlusterFS can be used as a storage domain:
>>
>> a) Use gluster native/fuse access with POSIXFS
>>
>> b) Use the gluster native storage domain to bypass fuse (with
>> libgfapi). We are currently addressing an issue in libvirt
>> (https://bugzilla.redhat.com/show_bug.cgi?id=1017289) to enable
>> snapshot support with libgfapi. Once this is addressed, we will have
>> libgfapi support in the native storage domain.
>
> It won't be as immediate, since there's a required fix on Vdsm's side
> (Bug 1022961 - Running a VM from a gluster domain uses mount instead of
> gluster URI)
>
>> Till then, fuse would
>> be used with native storage domain. You can find more details about
>> native storage domain here:
>>
>> http://www.ovirt.org/Features/GlusterFS_Storage_Domain
Hello,
reviving this thread...
I'm using oVirt 3.3.3 beta1 on Fedora 19 (ovirt beta repo), after upgrading from 3.3.2.
It seems the bug referred to by Vijay (1017289) is still marked as
"assigned", but it actually targets RHEL 6.
The bug referred to by Dan (1022961) is marked as "blocked", but I don't see
any particular updates since late November. It is filed against RHEV-M, so I
think it also targets RHEL 6...
So what is the situation for Fedora 19 and oVirt in the upcoming 3.3.3?
And for upcoming Fedora 19/20 and 3.4?
I ask because in the qemu command line generated by oVirt I see,
for a virtio (virtio-blk) disk:
-drive file=/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,serial=a5e4f67b-50b5-4740-9990-39deb8812445,cache=none,werror=stop,rerror=stop,aio=threads
and for a virtio-scsi disk:
-drive file=/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/c1477133-6b06-480d-a233-1dae08daf8b3/c2a82c64-9dee-42bb-acf2-65b8081f2edf,if=none,id=drive-scsi0-0-0-0,format=raw,serial=c1477133-6b06-480d-a233-1dae08daf8b3,cache=none,werror=stop,rerror=stop,aio=threads
So the disk is referenced through the fuse mount point and not via a gluster:// URI.
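For comparison, if the native gluster block driver (libgfapi) were in use, the drive would point qemu directly at the volume with a gluster:// URL instead of a path under the fuse mount. The line below is only a sketch of the expected shape, reusing the host/volume/image values from the command line above; it is not something oVirt generates on this setup:

# hypothetical libgfapi drive specification (shape only, not taken from a real host)
-drive file=gluster://node01.mydomain/gvdata/d0b96d4a-62aa-4e9f-b50e-f7a0cb5be291/images/a5e4f67b-50b5-4740-9990-39deb8812445/53408cb0-bcd4-40de-bc69-89d59b7b5bc2,if=none,id=drive-virtio-disk0,format=raw,cache=none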
Also, the output of the "mount" command on the hypervisors shows:
node01.mydomain:gvdata on
/rhev/data-center/mnt/glusterSD/node01.mydomain:gvdata type
fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
so it does indeed seem to be a fuse mount, i.e. not using libgfapi...
However, the output of "lsof -Pp <pid>", where <pid> is the qemu process, shows that libgfapi is loaded:
qemu-syst 2057 qemu mem REG 253,0 99440 541417 /usr/lib64/libgfapi.so.0.0.0
(btw: 0.0.0 is a strange version for a release... not so reassuring ;-)
# ll /usr/lib64/libgfapi.so.0*
lrwxrwxrwx. 1 root root 17 Jan 7 12:45 /usr/lib64/libgfapi.so.0 -> libgfapi.so.0.0.0
-rwxr-xr-x. 1 root root 99440 Jan 3 13:35 /usr/lib64/libgfapi.so.0.0.0
)
The page at http://www.gluster.org/category/qemu/ has a schema of the
access types and benchmarks:
1) FUSE mount
2) GlusterFS block driver in QEMU (FUSE bypass)
3) Base (VM image accessed directly from the brick)
(for the Base case the command used was:
qemu-system-x86_64 --enable-kvm --nographic -smp 4 -m 2048 -drive
file=/test/F17,if=virtio,cache=none => /test is the brick directory
)
I have not understood whether my setup falls into the Base case (best
performance) or the FUSE case (worst performance).
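One quick way to check, on the hypervisor, which of the three cases a running VM falls into is to look at the file= argument qemu was started with: a path under /rhev/data-center/mnt/glusterSD/... means the fuse mount, a gluster:// URL means the native block driver (libgfapi), and a plain brick path would be the Base case. A minimal sketch (the process name pattern is an assumption, adjust as needed):

# list the disk sources of all running qemu processes
for pid in $(pgrep -f qemu-system); do
    tr '\0' '\n' < /proc/$pid/cmdline | grep -o 'file=[^,]*'
done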
Thanks in advance for clarifications and possible roadmaps...
Gianluca
[Users] [QE] oVirt 3.3.3 beta / RC status
by Sandro Bonazzola
Hi,
oVirt 3.3.3 beta has been released and is currently in QA.
We'll start composing oVirt 3.3.3 RC on 2014-01-21 12:00 UTC.
The bug tracker [1] shows no bugs blocking the release.
The following is a list of the non-blocking bugs still open with target 3.3 - 3.3.3:
Whiteboard Bug ID Summary
integration 902979 ovirt-live - firefox doesn't trust the installed engine
integration 1021805 [RFE] oVirt Live - use motd to show the admin password
integration 1022440 [RFE] AIO - configure the AIO host to be a gluster cluster/host
integration 1026930 Package virtio-win and put it in ovirt repositories
integration 1026933 pre-populate ISO domain with virtio-win ISO
integration 1050084 Tracker: oVirt 3.3.3 release
network 997197 Some AppErrors messages are grammatically incorrect (singular vs plural)
node 906257 USB Flash Drive install of ovirt-node created via dd fails
node 923049 ovirt-node fails to boot from local disk under UEFI mode
node 965583 [RFE] add shortcut key on TUI
node 976675 [wiki] Update contribution page
node 979350 Changes admin password in the first time when log in is failed while finished auto-install
node 979390 [RFE] Split defaults.py into smaller pieces
node 982232 performance page takes >1sec to load (on first load)
node 984441 kdump page crashed before configuring the network after ovirt-node intalled
node 986285 UI crashes when no bond name is given
node 991267 [RFE] Add TUI information to log file.
node 1018374 ovirt-node-iso-3.0.1-1.0.2.vdsm.fc19: Failed on Auto-install
node 1018710 [RFE] Enhance API documentation
node 1032035 [RFE]re-write auto install function for the cim plugin
node 1033286 ovirt-node-plugin-vdsm can not be added to ovirt node el6 base image
storage 987917 [oVirt] [glance] API version not specified in provider dialog
virt 1007940 Cannot clone from snapshot while using GlusterFS as POSIX Storage Domain
Maintainers:
- We'll start composing oVirt 3.3.3 RC on 2014-01-21 12:00 UTC: all non-blocking bugs still open after the build will be moved to 3.3.4.
- Please add bugs to the tracker if you think that 3.3.3 should not be released without them fixed.
- Please provide an ETA for any bugs you add as blockers.
- Please re-target all bugs you don't think should block 3.3.3.
- Please fill in the release notes; the page has been created here [3].
- Please remember to rebuild your packages before 2014-01-21 12:00 UTC if you want them to be included in 3.3.3 RC.
For those who want to help test the bugs, I suggest adding yourself as QA contact for the bug and adding yourself to the testing page [2].
Thanks to Gianluca Cecchi for his testing of oVirt 3.3.3 beta on Gluster storage!
[1] http://bugzilla.redhat.com/1050084
[2] http://www.ovirt.org/Testing/Ovirt_3.3.3_testing
[3] http://www.ovirt.org/OVirt_3.3.3_release_notes
Thanks,
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] [QE] oVirt bugzilla updates
by Sandro Bonazzola
Hi oVirt community,
for all people interested in joining the QE effort, we've created a new user in bugzilla as the default QA assignee: bugs@ovirt.org.
If you want to be updated on QE bug activity, you can just add that user to your bugzilla account watch list:
https://bugzilla.redhat.com/userprefs.cgi?tab=email
Watch user list -> add "bugs@ovirt.org"
Any email sent by bugzilla to that user will also be sent to you.
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
[Users] VM start Options
by Wolfgang Bucher
Hello,
how can I change a VM's start options from aio=threads to aio=native?

Greetings,
Wolfgang
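A commonly used way to apply this kind of per-VM qemu tweak is a vdsm hook that edits the libvirt domain XML just before the VM starts. The following is a minimal sketch, assuming vdsm passes the domain XML path in the _hook_domxml environment variable and that the disk <driver> element carries an io="threads" attribute; both assumptions are worth verifying on your installation before relying on this:

#!/bin/bash
# hypothetical before_vm_start hook (a sketch, not a tested solution):
# switch the disk driver I/O mode from "threads" to "native" by editing
# the libvirt domain XML whose path vdsm exposes in $_hook_domxml.
# Install as an executable under /usr/libexec/vdsm/hooks/before_vm_start/
sed -i "s/io=\(['\"]\)threads\1/io=\1native\1/g" "$_hook_domxml"

Whether aio=native actually helps depends on the storage backend and cache mode, so it is worth benchmarking both settings.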
[Users] Horrid performance during disk I/O
by Blaster
This is probably more appropriate for the qemu users mailing list, but that list doesn’t get much traffic and most posts go unanswered…
As I’ve mentioned in the past, I’m migrating my environment from ESXi to oVirt AIO.
Under ESXi I was pretty happy with the disk performance, and noticed very little difference from bare metal to HV.
Under oVirt/QEMU/KVM, not so much….
Running hdparm on the disk from the HV and from the guest yields the same number, about 180MB/sec (SATA III disks, 7200RPM). The problem is that during disk activity, no matter whether it's the Windows 7 guests or the Fedora 20 ones (both using virtio-scsi), the qemu-system-x86 process starts consuming 100% of the hypervisor CPU. The hypervisor is a Core i7 950 with 24GB of RAM. There are 2 Fedora 20 guests and 2 Windows 7 guests, each configured with 4 GB of guaranteed RAM.
Load averages can go up over 40 during sustained disk IO. Performance obviously suffers greatly.
I have tried all combinations of hosting the guests on ext4 and Btrfs and using ext4 and Btrfs inside the guests, as well as direct LUN. It doesn't make any difference: disk I/O sends qemu-system-x86 to high CPU percentages.
This can’t be normal, so I’m wondering what I’ve done wrong. Is there some magic setting I’m missing?
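One way to start narrowing this down is to confirm which cache/aio options each guest's disks were actually started with, and then see where the busy qemu process spends its time. A rough sketch with standard tooling (perf may need to be installed separately):

# show the full drive option string each running qemu guest was started with
for pid in $(pgrep -f qemu-system); do
    echo "== pid $pid =="
    tr '\0' '\n' < /proc/$pid/cmdline | grep 'file='
done

# then profile one busy guest to see whether the time goes into the
# storage path (image format, cache, aio threads) or elsewhere
perf top -p <pid-of-busy-qemu-process>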
[Users] VirtIO disk latency
by Markus Stockhausen
Hello,

coming from the "low cost NFS storage" thread I will open a new one
about a topic that might be interesting for others too.

We see quite a heavy latency penalty using KVM VirtIO disks in comparison
to ESX. Doing one I/O onto disk inside a VM usually adds 370us of overhead in
the virtualisation layer. This has been tested with VirtIO-SCSI and a Windows
guest (2K3). More here (still no answer yet):

http://lists.nongnu.org/archive/html/qemu-discuss/2013-12/msg00028.html

A comparison for small sequential 1K I/Os on an NFS datastore in our setup gives:

- access NFS inside the hypervisor - 12,000 I/Os per second - or 83us latency
- access DISK inside an ESX VM that resides on NFS - 8,000 I/Os per second - or 125us latency
- access DISK inside an oVirt VM that resides on NFS - 2,200 I/Os per second - or 450us latency

Even the official document at http://www.linux-kvm.org/page/Virtio/Block/Latency
suggests that the several mechanisms (iothread/vcpu) have an overhead
of more than 200us.

Has anyone experienced something similar? If these latencies are normal it would
make no sense to think about SSDs inside a central storage (be it iSCSI or NFS
or whatever).

Markus
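For anyone who wants to reproduce a comparable measurement, a small-block test can be run with fio both on the hypervisor's NFS mount and inside a guest, and the reported latencies compared. The parameters below are only a sketch of such a test (file name, size and queue depth are assumptions, not the exact benchmark used above):

# sequential 1K writes with direct I/O, single job, async engine
fio --name=seq1k --filename=/path/to/nfs/testfile --rw=write --bs=1k \
    --size=256m --ioengine=libaio --direct=1 --numjobs=1 --iodepth=1

The difference between the latency fio reports on the hypervisor and inside the VM gives a rough figure for the per-I/O overhead added by the virtualisation layer.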
Re: [Users] Attach NFS storage to multiple datacenter
by Amedeo Salvati
On 16/01/2014 18:00, users-request@ovirt.org wrote:
> From: "Tobias Fiebig"<mail(a)wybt.net>
> To: "Elad Ben Aharon"<ebenahar(a)redhat.com>
> Cc:"Users(a)ovirt.org List" <Users(a)ovirt.org>
> Sent: Thursday, January 16, 2014 6:37:24 PM
> Subject: Re: [Users] Attach NFS storage to multiple datacenter
>
> Heho,
> On Thu, 2014-01-16 at 11:10 -0500, Elad Ben Aharon wrote:
>> You don't have to attach/detach the domain for every VM export. Sorry for not being clear. You can export all your VMs to the export domain at once and then detach the export domain and attach it to the second DC.
> May have miscommunicated that...
> I cannot do this, because it means that the VMs will be down
> during the whole move. I'd like to reduce the per-VM
> downtime as much as possible.
>
> With best Regards,
> Tobias
you can decrease the downtime by:
- live snapshot VM1;
- clone VM1 from snapshot to VM1-NEW (during this process VM1 is
up&running);
- export VM1-NEW to the export domain (VM1 up&running; see the REST API sketch after this list);
- detach export domain from old DC;
- attach export domain to new DC;
- import VM1-NEW (VM1 still up&running);
- start VM1-NEW in single mode and configure it;
- stop apps from VM1 (downtime);
- optionally sync app data from VM1 to VM1-NEW (downtime);
- start apps on VM1-NEW (apps up&running on new DC).
- stop VM1 on old DC.
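For the export/import steps, the oVirt REST API can be scripted instead of using the web admin UI. A minimal sketch of the export action, assuming the 3.3-era /api endpoint; the engine host name, credentials, VM id and export domain name are all placeholders:

# export an existing VM to the export storage domain via the REST API
curl -k -u admin@internal:password \
     -H "Content-Type: application/xml" \
     -d '<action><storage_domain><name>export1</name></storage_domain></action>' \
     https://engine.example.com/api/vms/<vm-id>/export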
HTH
a
--
Amedeo Salvati
RHC{DS,E,VA} - LPIC-3 - UCP - NCLA 11
email: amedeo@oscert.net
email: amedeo@linux.com
http://plugcomputing.it/redhatcert.php
http://plugcomputing.it/lpicert.php
[Users] Management server
by Koen Vanoppen
Dear All,
Our setup at work is currently 1 management server and 7 hypervisors. All
the systems run CentOS 6.4.
Now my question: because all the systems run CentOS, we can't use the
reports portal and the DWH portal.
Would it be possible to add a second management server (Fedora 19) to our
already existing setup, without *breaking* that setup, so that it can
provide us with the reports and DWH portals and can also serve as a
"backup" management server? If so, how do we handle this?
Kind regards,
Koen
[Users] ovirt-engine-reports rpm for CentOS and oVirt 3.3
by Nicolas Ecarnot
Good morning or whenever,
I read which RPM I have to install at http://www.ovirt.org/Ovirt_Reports,
but I cannot find any RPM for CentOS 6.
I searched extensively around the web and in the various repo folders, as
well as in the mailing list messages, and it seems the production of RPMs
for CentOS has been disabled for whatever reason.
Is there a private recipe to create them, or a way to get them unofficially
(reports, dwh, jasperreports...)?
Thank you.
--
Nicolas Ecarnot