Say Hello to the oVirt Engine Virtual Appliance
by Fabian Deutsch
Hey,
one of the things on the list for oVirt 3.5 was the oVirt Virtual Appliance.
"Huh, what's that?" you might ask. Well, imagine a cloud image with oVirt Engine 3.5
and its dependencies pre-installed, plus a sane default answer file for
ovirt-engine-setup, all of this delivered as an OVA file. The intention is to get
you a running oVirt Engine without much hassle.
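To give an idea, once the OVA is imported and the appliance is booted, completing
the setup should be little more than feeding engine-setup the bundled answer file,
roughly like this (the answer-file path here is illustrative):

    engine-setup --config-append=/root/ovirt-engine-answers.conf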
Furthermore, this appliance can be used in conjunction with - and is actually
intended for - the Self Hosted Engine feature and the upcoming oVirt
Node Hosted Engine plugin.
More information and links about the appliance and how to build it can be found here:
http://dummdida.tumblr.com/post/88944206100/say-hello-to-the-ovirt-engine...
Testing it with hosted engine is the next step.
Greetings
fabian
Call for Papers Deadline in One Week
by Brian Proffitt
Conference: Open World Forum 2014
Information: This year's program will show you how to take back control of your digital world, including IT/IS and (personal) data, whether you are a professional or not. Stop losing control and discover how Free and Open Source software can help you become increasingly independent, whether technologically, legally or financially.
Date: October 30-November 1, 2014
Location: Paris, France
Website: http://openworldforum.org/
Call for Papers Deadline: June 22, 2014
Call for Papers URL: http://openworldforum.org/en/cfp/
--
Brian Proffitt
oVirt Community Manager
Project Atomic Community Lead
Open Source and Standards, Red Hat - http://community.redhat.com
Phone: +1 574 383 9BKP
IRC: bkp @ OFTC
Small note on workarounds in UI code
by Vojtech Szocs
Hi guys,
recently, we've merged a small patch [1] that adds some info on notable
(in this case, GWT-Platform framework-related) workarounds via comments
that look like this:
// TODO-GWT: description goes here
If you find yourself working around some GWT or GWT-Platform issue,
please add a comment like the above with a brief description, e.g. a link
to the GWT(P) issue tracker and maybe also a short summary of the workaround.
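For example, something like this (the issue number and the code itself are made
up, just to illustrate the format):

    // TODO-GWT: workaround for GWT issue NNNN (link to issue tracker),
    // re-attach the widget to force layout recalculation; drop after next GWT upgrade
    parent.remove(widget);
    parent.add(widget);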
In the future, when upgrading to newer GWT(P) versions, this should aid us
in reviewing existing code and reduce the risk of issues caused
by the upgrade. (Ideally, workarounds shouldn't be needed once the
relevant issues are fixed in newer versions; in the worst case, a
workaround might still compile but no longer work, or even
break things.)
[1] http://gerrit.ovirt.org/#/c/24995/
Thanks,
Vojtech
virtio-scsi
by Marcus White
Hello,
I have some basic questions; I was asking them on the KVM list, but it
probably makes more sense to ask here :)
The questions are about virtio-scsi. I would appreciate any help.
1. Is virtio-scsi via QEMU a zero-copy operation from guest to disk?
vhost (LIO), as I understand it, is zero-copy; I wasn't sure about QEMU.
With the VM and the QEMU thread being part of the same process, it would
be good to know the flow and where the copies occur, if they do (rough
sketches of both setups below).
2. When an error code comes in from the block layer underneath QEMU
(assuming QEMU talks to the block device and not to sg, for example),
is it converted to a SCSI response before being passed to the guest?
3. Are there any other performance tweaks that would help in both the
QEMU case and the LIO case?
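For reference, the two setups I mean look roughly like this (the image path,
device IDs and WWPN are made up):

    # virtio-scsi served by QEMU's userspace block layer
    qemu-system-x86_64 ... \
        -device virtio-scsi-pci,id=scsi0 \
        -drive file=/var/lib/images/guest.img,if=none,id=drive0,format=raw,cache=none,aio=native \
        -device scsi-hd,drive=drive0,bus=scsi0.0

    # vhost-scsi backed by an in-kernel LIO target
    qemu-system-x86_64 ... \
        -device vhost-scsi-pci,wwpn=naa.5001405e1a2b3c4d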
Thanks in advance:)
MW
Re: [ovirt-devel] [ovirt-users] Recommended setup for a FC based storage domain
by Sven Kieske
CC'ing the devel list, maybe some VDSM and storage people can
explain this?
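For context, the tuning discussed below is lowering the libvirt and VDSM log
levels to WARNING, which as far as I can tell means roughly this in the two
files named below (section and key names from the stock files, values
illustrative), followed by a restart of libvirtd and vdsmd:

    # /etc/libvirt/libvirtd.conf
    log_level = 3            # 1 = DEBUG (very verbose) ... 3 = WARNING

    # /etc/vdsm/logger.conf (Python logging fileConfig format)
    [logger_root]
    level=WARNING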
On 10.06.2014 12:24, combuster wrote:
> /etc/libvirt/libvirtd.conf and /etc/vdsm/logger.conf
>
> , but unfortunately maybe I jumped to conclusions; last weekend, that
> very same thin-provisioned VM was running a simple export for 3 hrs
> before I killed the process. But I wondered:
>
> 1. The process that runs behind the export is qemu-img convert (from raw
> to raw), and running iotop shows that every three or four seconds it
> reads 10-13 MBps and then idles for a few seconds. Run the numbers on
> 100GB (why it covers the entire 100GB when only 15GB of the thin volume
> are used, I still don't get) and you arrive precisely at the 3-4 hrs
> estimated time remaining.
> 2. When I run the export with the SPM on a node that doesn't have any
> VMs running, the export finishes in approx. 30 min (iotop shows a
> constant read speed of 40-70 MBps).
> 3. Renicing the I/O priority of the qemu-img process as well as its CPU
> priority gave no results; it was still running slow beyond any
> explanation.
>
> Debug logs showed nothing of interest, so I disabled anything above
> warning level and the export suddenly accelerated, so I had connected
> the wrong dots.
>
> On 06/10/2014 11:18 AM, Andrew Lau wrote:
>> Interesting, which files did you modify to lower the log levels?
>>
>> On Tue, Jun 3, 2014 at 12:38 AM, <combuster(a)archlinux.us> wrote:
>>> One word of caution so far: when exporting any VM, the node that
>>> acts as SPM is stressed out to the max. I relieved the stress by a
>>> certain margin by lowering the libvirtd and vdsm log levels to
>>> WARNING. That shortened the export procedure at least fivefold. But
>>> the vdsm process on the SPM node still shows high CPU usage, so it's
>>> best that the SPM node be left with a decent amount of CPU time to
>>> spare. Also, an export of VMs with high vdisk capacity and thin
>>> provisioning enabled (let's say 14GB used of 100GB defined) took
>>> around 50 min over a 10Gb ethernet interface to a 1Gb export NAS
>>> device that was not stressed at all by other processes. When I did
>>> that export with debug log levels it took 5 hrs :(
>>>
>>> So lowering log levels is a must in a production environment. I've
>>> deleted the LUN that I exported on the storage (removed it first from
>>> oVirt) and for next weekend I am planning to add a new one, export it
>>> again on all the nodes and start a few fresh VM installations. Things
>>> I'm going to look for are partition alignment and running them from
>>> different nodes in the cluster at the same time. I just hope that not
>>> all I/O is going to pass through the SPM; this is the one thing that
>>> bothers me the most.
>>>
>>> I'll report back on these results next week, but if anyone has
>>> experience with this kind of thing or can point to some
>>> documentation, that would be great.
>>>
>>> On Monday, 2. June 2014. 18.51.52 you wrote:
>>>> I'm curious to hear what other comments arise, as we're analyzing a
>>>> production setup shortly.
>>>>
>>>> On Sun, Jun 1, 2014 at 10:11 PM, <combuster(a)archlinux.us> wrote:
>>>>> I need to scratch Gluster off because the setup is based on CentOS 6.5,
>>>>> so essential prerequisites like qemu 1.3 and libvirt 1.0.1 are not met.
>>>> Gluster would still work with EL6; afaik it just won't use libgfapi and
>>>> will instead use a standard mount.
>>>>
>>>>> Any info regarding FC storage domain would be appreciated though.
>>>>>
>>>>> Thanks
>>>>>
>>>>> Ivan
>>>>>
>>>>> On Sunday, 1. June 2014. 11.44.33 combuster(a)archlinux.us wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I have a 4-node cluster setup, and my storage options right now are
>>>>>> an FC-based storage, one partition per node on a local drive
>>>>>> (~200GB each) and an NFS-based NAS device. I want to set up the
>>>>>> export and ISO domains on the NAS, and there are no issues or
>>>>>> questions regarding those two. I wasn't aware of any other options
>>>>>> at the time for utilizing local storage (since this is a shared
>>>>>> datacenter), so I exported a directory from each partition via NFS
>>>>>> and it works. But I am a little in the dark with the following:
>>>>>>
>>>>>> 1. Are there any advantages in switching from NFS-based local
>>>>>> storage to a Gluster-based domain with bricks for each partition?
>>>>>> I guess it can only be performance-wise, but maybe I'm wrong. If
>>>>>> there are advantages, are there any tips regarding xfs mount
>>>>>> options etc.?
>>>>>>
>>>>>> 2. I've created a volume on the FC-based storage and exported it to
>>>>>> all of the nodes in the cluster on the storage itself. I've
>>>>>> configured multipathing correctly and added an alias for the WWID
>>>>>> of the LUN so I can distinguish this one from any future volumes
>>>>>> more easily. At first I created a partition on it, but since oVirt
>>>>>> saw only the whole LUN as a raw device, I erased it before adding
>>>>>> it as the FC master storage domain. I've imported a few VMs and
>>>>>> pointed them to the FC storage domain. This setup works, but:
>>>>>>
>>>>>> - All of the nodes see a device with the alias for the WWID of the
>>>>>> volume, but only the node which is currently the SPM for the
>>>>>> cluster can see the logical volumes inside. Also, when I set up
>>>>>> high availability for VMs residing on the FC storage and select to
>>>>>> start on any node in the cluster, they always start on the SPM.
>>>>>> Can multiple nodes run different VMs on the same FC storage at the
>>>>>> same time? (The logical thing would be that they can, but I wanted
>>>>>> to be sure first.) I am not familiar with the logic oVirt utilizes
>>>>>> to lock a VM's logical volume to prevent corruption.
>>>>>>
>>>>>> - Fdisk shows that the logical volumes on the LUN of the FC volume
>>>>>> are misaligned (the partition doesn't end on a cylinder boundary),
>>>>>> so I wonder if this is because I imported the VMs with disks that
>>>>>> were created on local storage before, and whether any _new_ VMs
>>>>>> with disks on the FC storage would be properly aligned.
>>>>>>
>>>>>> This is a new setup with oVirt 3.4 (I did an export of all the VMs
>>>>>> on 3.3 and imported them back again after a fresh installation of
>>>>>> 3.4). I have room to experiment a little with 2 of the 4 nodes
>>>>>> because they are currently free from running any VMs, but I have
>>>>>> limited room for anything else that would cause unplanned downtime
>>>>>> for the four virtual machines running on the other two nodes in
>>>>>> the cluster (currently highly available, with their drives on the
>>>>>> FC storage domain). All in all I have 12 VMs running, and I'm
>>>>>> asking on the list for advice and guidance before I make any
>>>>>> changes.
>>>>>>
>>>>>> Just trying to find as much info regarding all of this as possible
>>>>>> before acting on it.
>>>>>>
>>>>>> Thank you in advance,
>>>>>>
>>>>>> Ivan
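A side note on the qemu-img behaviour quoted above: when the source is a raw
volume on a block device, qemu-img convert has no allocation map to consult,
so it reads the full virtual size no matter how little of it is actually
used, which would explain 100GB being read for a 15GB-used thin volume. The
command behind the export is roughly (paths illustrative):

    qemu-img convert -p -f raw -O raw /dev/vg/src-lv /export/dst.img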
--
Kind regards
Sven Kieske
Systemadministrator
Mittwald CM Service GmbH & Co. KG
Königsberger Straße 6
32339 Espelkamp
T: +49-5772-293-100
F: +49-5772-293-333
https://www.mittwald.de
Managing Director: Robert Meyer
Tax no.: 331/5721/1033, VAT ID: DE814773217, HRA 6640, Bad Oeynhausen local court
General partner: Robert Meyer Verwaltungs GmbH, HRB 13260, Bad Oeynhausen local court
Re: [ovirt-devel] [ovirt-users] Get involved in oVirt integration! June edition
by Sandro Bonazzola
On 12/06/2014 17:26, René Koch wrote:
> Hi Sandro,
>
> Sadly I don't have time to work on one of these bugs, but I have a general question on how contributing to oVirt works. [2] explains in detail how to
> push code to gerrit.
>
> Since I wanted to start contributing to oVirt, I began with the easiest thing: extending the list of operating systems. So I created a bug for this
> (1101219), pushed the code to gerrit (referencing this bug id) and the Jenkins build job was successful. So I did everything mentioned in [2] - but what
> now?
> Shall I just wait and see what happens?
> Should I post to ovirt-devel that someone will review the code?
Good questions. Posting to ovirt-devel will help get people to review your patches.
I did "git blame" on the file you changed in http://gerrit.ovirt.org/28113 and added those who had already changed that file as reviewers for the patch.
> Who decides which oVirt version will include this code, or whether this code will be included at all or needs adaptations (I only know that a certain score in gerrit
> is required and some developers are allowed to rate a commit)?
>
> I'm aware that my commit isn't fancy at all and there's no need for any priority; I just want to know the right workflow in case I have more
> time to work on some bugs...
>
> I guess these questions could be interesting for other users (basically users who, like me, are not familiar with a professional development setup/workflow
> including gerrit) who want to work on the bugs in the June edition as well...
I agree. You did everything right so far:
- opened a bz
- pushed a patch referencing it
- added the gerrit reference to bugzilla
All that's missing is adding other people to review the patch.
Oved Ourfali already set the whiteboard to virt, so this is a virt team patch and the virt team maintainers will merge your patch once it's ready.
Itamar Heim also reviewed the bug you filed and set the target release to 3.5.0.
Now you just need to wait for other developers to review the patch and for a maintainer to merge it.
If there are comments on your patch, a new version may be needed (just amend the patch and push again with the same Change-Id),
and it will be reviewed again.
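In practice, that amend-and-repush cycle is just the usual Gerrit workflow,
roughly (branch name illustrative):

    git commit --amend      # keep the existing Change-Id footer in the message
    git push origin HEAD:refs/for/master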
You can find more info on gerrit here: https://gerrit-review.googlesource.com/Documentation
>
>
> Thanks,
> René
>
>
> On 06/12/2014 04:11 PM, Sandro Bonazzola wrote:
>> Hi,
>> have you got some free time, and do you want to get involved in oVirt integration?
>> After the success of the first edition, we're now proposing this again.
>> Here are a few bugs you can hopefully fix in less than one day, or you can just try to reproduce them and provide info:
>>
>> Bug 1091651 - Misleading error message when first host cannot be reached during hosted-engine deployment
>> Bug 1080823 - [RFE] make override of iptables configurable when using hosted-engine
>> Bug 1065350 - hosted-engine should prompt a question at the user when the host was already a host in the engine
>> Bug 1097635 - ovirt-hosted-engine-setup fails with Modules libvirt contains invalid configuration
>>
>> Is this the first time you've tried to contribute to the oVirt project?
>> You can start from here [1][2]!
>>
>> [1] http://www.ovirt.org/Develop
>> [2] http://www.ovirt.org/Working_with_oVirt_Gerrit
>>
>>
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com
Get involved in oVirt integration! June edition
by Sandro Bonazzola
Hi,
have you got some free time, and do you want to get involved in oVirt integration?
After the success of the first edition, we're now proposing this again.
Here are a few bugs you can hopefully fix in less than one day, or you can just try to reproduce them and provide info:
Bug 1091651 - Misleading error message when first host cannot be reached during hosted-engine deployment
Bug 1080823 - [RFE] make override of iptables configurable when using hosted-engine
Bug 1065350 - hosted-engine should prompt a question at the user when the host was already a host in the engine
Bug 1097635 - ovirt-hosted-engine-setup fails with Modules libvirt contains invalid configuration
Is this the first time you've tried to contribute to the oVirt project?
You can start from here [1][2]!
[1] http://www.ovirt.org/Develop
[2] http://www.ovirt.org/Working_with_oVirt_Gerrit
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com