Re: [Users] What do you want to see in oVirt next?

On 01/03/2013 11:26 PM, Alexandru Vladulescu wrote:
I would like to add a request for the upcoming 3.2 version, if possible:
Hi Alexandru, just to note my question was about the post-3.2 version, as 3.2 is basically done.
Although some of you use SPICE instead of VNC, as an Ubuntu user on my desktop and laptop the SPICE protocol does not work within my OS, even though I have tried building it from source, searching for unofficial deb packages, and converting it from rpm packages. I know SPICE is strongly supported in the Fedora community, and on the server side I work with RH Enterprise and CentOS myself, but for desktop use I have a very hard time getting the SPICE plugin for Firefox to work.
Therefore, as my solution, I set up a VNC reflector plus some shell automation to make it work between two different subnets (one inside and one outside) -- this somewhat added to the initial scope.
It would have been much easier to have a VNC proxy inside the oVirt engine, from which to make the necessary setup and assignment of a console to each VM, or, even though I might sound funny, a solution like VRDE on VirtualBox, because it works damn great and is easy to set up or change.
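(For illustration only: a bare-bones version of such a VNC relay between two subnets can be written in a few lines of Python. The listen address and backend VNC endpoint below are hypothetical placeholders, not the actual reflector setup described above, and a real deployment would still need access control on top.)

import socket
import threading

# Hypothetical addresses: the relay listens on the "outside" subnet and
# forwards to a VM console on the "inside" subnet.
LISTEN_ADDR = ("0.0.0.0", 5901)        # where remote viewers connect
VNC_BACKEND = ("192.168.10.25", 5900)  # actual VNC console of the VM

def pipe(src, dst):
    # Copy bytes one way until either side closes the connection.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def main():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(LISTEN_ADDR)
    server.listen(5)
    while True:
        client, _ = server.accept()
        backend = socket.create_connection(VNC_BACKEND)
        # one thread per direction so traffic flows both ways concurrently
        threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
        threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

if __name__ == "__main__":
    main()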
Last question, if I may ask: when is 3.2 planned to be released (approximately)?
last update was here: http://lists.ovirt.org/pipermail/users/2013-January/011454.html Thanks, Itamar

I set up an ovirt 3.1 all-in-one config with Fedora and added one node. Created virtual machines and so forth. All seems to work well. I then decided to add iscsi and can verify that both the ovirt node and the all-in-one node can 'see' the storage even though it is uninitialized and does not have a partition table:
# parted /dev/mapper/1IET_00010001 p
Error: /dev/mapper/1IET_00010001: unrecognised disk label
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/1IET_00010001: 52.4GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
From the OS on both the all-in-one host and the node, I can do 'normal' iscsi things and storage things without error.
I created a new iscsi domain off the Default datacenter and it comes up although it says it is unattached. If I attempt to attach it I get:
There are No Data Centers to which the Storage Domain can be attached.
I'm confused. What am I missing out of my config to get iscsi added after the fact? Or is this something you can only do at installation time? It seems like this is one of those many things that need to be done in a certain order to get it to work.
Thanks,
Rick

On 01/12/2013 06:01 PM, Rick Beldin wrote:
I set up an ovirt 3.1 all-in-one config with Fedora and added one node. Created virtual machines and so forth. All seems to work well.
I then decided to add iscsi and can verify that both the ovirt node and the all-in-one node can 'see' the storage even though it is uninitialized and does not have a partition table:
# parted /dev/mapper/1IET_00010001 p
Error: /dev/mapper/1IET_00010001: unrecognised disk label
Model: Linux device-mapper (multipath) (dm)
Disk /dev/mapper/1IET_00010001: 52.4GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:
From the OS on both the all-in-one host and the node, I can do 'normal' iscsi things and storage things without error.
I created a new iscsi domain off the Default datacenter and it comes up although it says it is unattached. If I attempt to attach it I get:
There are No Data Centers to which the Storage Domain can be attached.
I'm confused. What am I missing out of my config to get iscsi added after the fact? Or is this something you can only do at installation time?
It seems like this is one of those many things that need to be done in a certain order to get it to work.
Thanks,
Rick
you still can't mix different types of storage (local/iscsi) in same DC, so you need to add another DC for the hosts which will use the iscsi storage.
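(For anyone hitting the same "No Data Centers" message: the fix Itamar describes can also be scripted. The sketch below uses the oVirt Python SDK (ovirt-engine-sdk, v3-era); the engine URL, credentials, and object names are hypothetical placeholders, and exact parameter names may differ between SDK versions. It only illustrates that the iSCSI domain has to be attached to a DC whose storage type is iscsi, not to the local-storage Default DC.)

from ovirtsdk.api import API
from ovirtsdk.xml import params

# Hypothetical engine URL and credentials.
api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

# 1. Create a data center whose storage type is iscsi
#    (the local-storage Default DC cannot hold an iSCSI domain).
dc = api.datacenters.add(params.DataCenter(
    name='iscsi_dc',
    storage_type='iscsi',
    version=params.Version(major=3, minor=1)))

# 2. Create a cluster in the new DC and move/add a host into it
#    (omitted here for brevity).

# 3. Attach the already-created iSCSI storage domain to the new DC
#    and activate it.
sd = api.storagedomains.get(name='iscsi_data')  # hypothetical domain name
attached = dc.storagedomains.add(sd)
attached.activate()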

you still can't mix different types of storage (local/iscsi) in same DC, so you need to add another DC for the hosts which will use the iscsi storage.
I can understand this for a cluster, but not for a DC - what is the reason for not allowing different cluster types within a DC? Thanks

On 01/13/2013 01:25 AM, Tom Brown wrote:
you still can't mix different types of storage (local/iscsi) in same DC, so you need to add another DC for the hosts which will use the iscsi storage.
I can understand this for a cluster, but not for a DC - what is the reason for not allowing different cluster types within a DC?
no real reason for either dc or cluster, just legacy

On 01/13/2013 02:09 AM, Itamar Heim wrote:
On 01/13/2013 01:25 AM, Tom Brown wrote:
you still can't mix different types of storage (local/iscsi) in same DC, so you need to add another DC for the hosts which will use the iscsi storage.
Thanks for pointing this out. I missed a major conceptual point here.
Does this mean you might as well not have any local storage on a node beyond that to support the hypervisor?
How do you move the virtual disks already deployed on one type of storage to another? While not common, I could expect this activity to occur as organizations grow their environment.
Thanks,
Rick

On 01/14/2013 03:22 PM, Rick Beldin wrote:
On 01/13/2013 02:09 AM, Itamar Heim wrote:
On 01/13/2013 01:25 AM, Tom Brown wrote:
you still can't mix different types of storage (local/iscsi) in same DC, so you need to add another DC for the hosts which will use the iscsi storage.
Thanks for pointing this out. I missed a major conceptual point here.
Does this mean you might as well not have any local storage on a node beyond that to support the hypervisor?
true. until we support mixed storage. (well, unless you use local storage on all hosts via glusterfs)
How do you move the virtual disks already deployed on one type of storage to another? While not common, I could expect this activity to occur as organizations grow their environment.
export/import (via an NFS export storage domain)
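(As a rough sketch of that export/import flow with the oVirt Python SDK (ovirt-engine-sdk, v3-era): all names are hypothetical placeholders, the VM must be shut down first, and the exact action/method names may vary between SDK versions.)

from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url='https://engine.example.com/api',
          username='admin@internal', password='secret', insecure=True)

export_dom = api.storagedomains.get(name='export_nfs')    # NFS export domain
target_sd = api.storagedomains.get(name='iscsi_data')     # destination data domain
target_cluster = api.clusters.get(name='iscsi_cluster')   # cluster in the new DC

# 1. Export the (stopped) VM to the NFS export domain.
vm = api.vms.get(name='vm01')
vm.export(params.Action(storage_domain=export_dom))

# 2. After detaching the export domain from the old DC and attaching it
#    to the new one, import the VM onto the destination storage.
exported_vm = export_dom.vms.get(name='vm01')
exported_vm.import_vm(params.Action(cluster=target_cluster,
                                    storage_domain=target_sd))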

On 01/14/2013 05:19 PM, Tom Brown wrote:
true. until we support mixed storage.
is that an item on the roadmap?
Yes, usually dubbed "SDM"[1], but several milestones are needed before we get there. [1] granularity at the storage domain level, rather than SPM, which is pool-level granularity.

true. until we support mixed storage.
is that an item on the roadmap?
Yes, usually dubbed "SDM"[1], but several milestones are needed before we get there. [1] granularity at the storage domain level, rather than SPM, which is pool-level granularity.
Great to hear - any clues on a rough, non-committed date?

On 01/14/2013 05:41 PM, Tom Brown wrote:
true. until we support mixed storage.
is that an item on the roadmap?
Yes, usually dubbed "SDM"[1], but several milestones are needed before we get there. [1] granularity at the storage domain level, rather than SPM, which is pool-level granularity.
Great to hear - any clues on a rough, non-committed date?
not yet.
participants (3)
- Itamar Heim
- Rick Beldin
- Tom Brown