[Users] NFS data domain use host + local storage question

Jason Keltz jas at cse.yorku.ca
Thu Jul 25 13:55:02 UTC 2013


On 07/25/2013 09:27 AM, René Koch (ovido) wrote:
> On Thu, 2013-07-25 at 09:07 -0400, Jason Keltz wrote:
>> Hi.
>>
>> I have a few questions about data domains...
>>
>> I'm not sure I understand what the "Use Host" option is for when adding a
>> new NFS data domain.
>>
>>   From the RHEV documentation -  "All communication to the storage domain
>> is from the selected host and not directly from the Red Hat Enterprise
>> Virtualization Manager. At least one active host must be attached to the
>> chosen Data Center before the storage is configured. "
>>
>> .. but I'm puzzled..  don't all the nodes mount the NFS storage directly
>> from the NFS storage server?
>> Is this saying that if I have two nodes, v1 and v2, and I select v1 as the
>> "Use Host", then v2 gets at the storage through v1?  What if v1 is down?
>> Don't all nodes need a connection to the "logical" storage network?
>
> Hi,
>
> You need a host to initialize the storage.
> The host you choose with "Use Host" initially creates the data
> structures, etc., on the storage.
>
> Afterwards, all hosts in your cluster mount the storage and write data
> for their VMs.  There's no one-node bottleneck.
>
Great!  Got it .. thanks..
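
Just to convince myself (and for the archives), I'd probably sanity-check it
with something like this -- the host names and export path below are only
placeholders for my setup, and this is untested:

  #!/usr/bin/env python
  # Untested sketch: check that every host mounts the NFS data domain itself.
  # HOSTS and NFS_EXPORT are placeholders, not my real names.
  import subprocess

  HOSTS = ["v1.example.org", "v2.example.org"]
  NFS_EXPORT = "filer.example.org:/export/ovirt-data"

  for host in HOSTS:
      # Each host should show the export in its own mount table, i.e. it
      # talks to the NFS server directly, not through the "Use Host" host.
      mounts = subprocess.check_output(["ssh", "root@" + host, "mount"])
      state = "mounted" if NFS_EXPORT in mounts.decode() else "NOT mounted"
      print("%s: %s is %s" % (host, NFS_EXPORT, state))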

>> ---
>>
>> On the topic of local storage...
>> Right now, I have one node with 1 disk (until some ordered equipment
>> arrives)...
>> /data/images is /dev/mapper/HostVG-Data
>>
>> I want two of my nodes to store local data.  The majority of VMs will
>> use the NFS datastore, but a few VMs need local storage, and I'd like to
>> split these VMs across two nodes, so two nodes will have their own local
>> storage...
>
> So you will have VM storage on node01, node02 and on your NFS storage,
> right?
>

All the VMs on node01 and node02 would be stored on the NFS datastore, and
most of them would keep any required data on the NFS datastore as well.
A few of the VMs on node01 and node02, however, need a local data store.

>> If I were going to store local data on the node, I wouldn't put it on the
>> OS disk - I'd want another disk, or maybe even a few disks!  If I added
>> another disk to this system, how would I go about making *this* disk
>> "/data/images" instead of the root disk?  Do I have to reinstall the node?
> I would recommend using LVM and adding the new disks to your volume
> group/logical volume...
If I added another disk, would I be able to remove the existing 
datastore through the engine, and create a new one pointing at only the 
new disk?
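
If the LVM route works the way I think it does, I imagine it would look
roughly like this (untested sketch -- /dev/sdb and /dev/sda2 are just
placeholders for however the disks actually show up on the node):

  #!/usr/bin/env python
  # Untested sketch: move /data/images (HostVG-Data) onto a newly added disk.
  # /dev/sdb and /dev/sda2 are placeholders -- check "pvs" for the real names.
  import subprocess

  def run(*cmd):
      print("+ " + " ".join(cmd))
      subprocess.check_call(cmd)

  run("pvcreate", "/dev/sdb")                # label the new disk as an LVM PV
  run("vgextend", "HostVG", "/dev/sdb")      # add it to the existing HostVG
  # Move the extents of the Data LV off the OS disk onto the new disk, so
  # that /data/images ends up living only on /dev/sdb:
  run("pvmove", "-n", "Data", "/dev/sda2", "/dev/sdb")
  # Then grow the LV and filesystem into the remaining free space:
  run("lvextend", "-l", "+100%FREE", "/dev/HostVG/Data")
  run("resize2fs", "/dev/HostVG/Data")       # assuming ext4 on the Data LV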
>> I'm also puzzled by this statement: "A local storage domain can be set
>> up on a host. When you set up a host to use local storage, the host
>> automatically gets added to a new data center and cluster that no other
>> hosts can be added to. Multiple host clusters require that all hosts
>> have access to all storage domains, which is not possible with local
>> storage. Virtual machines created in a single host cluster cannot be
>> migrated, fenced or scheduled. "
>>
>> So .. let's say I have two nodes, both of them have some local disk, and
>> use the NFS data store.  I can see why I wouldn't be able to migrate a
>> VM from one node to the other IF that VM was using local data storage.
>> On the other hand, if a VM is NOT using local storage, and everything is
>> in the NFS datastore, then does this mean I can't migrate it, simply
>> because each host would have to be in its own cluster just because it has
>> local storage for *some* of the VMs!?
>
> Each local storage host requires its own datacenter, and you can't mix
> local storage and NFS storage in the same datacenter.
sigh.  This seems so rigid!  I understand, for example, why clusters 
must use the same CPU type.  I do not understand why a host cannot 
connect to both local data storage and NFS storage.

> What I would do in your case:
> 1. Use CentOS/Fedora hosts instead of oVirt-Node.
> 2. Configure an NFS server on each node.
> 3. Have 1 datacenter with 1 cluster and 2 nodes with storage type NFS.
> 4. Add 3 data storage domains (the NFS share of each host and the NFS share
> of your main NFS server).
> 5. Bind VMs that use a node's local NFS export to that host...
I never thought of that... very interesting!   I was really trying not 
to use anything but oVirt Node to keep the implementation as simple as 
possible.  The only problem here, if I understand correctly, is that each 
node would still be accessing even its local data via NFS, in which case it 
might as well be storing that data on the main NFS server itself!  :)
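
That said, if I did go down that path, I guess step 2 (the per-node NFS
export) would look something like this on each node.  The export path is a
placeholder, and the 36:36 ownership is the vdsm:kvm uid/gid that oVirt
expects on NFS exports -- again, untested:

  #!/usr/bin/env python
  # Untested sketch of step 2: a local NFS export for oVirt on each node.
  # EXPORT_DIR is a placeholder; 36:36 is the vdsm:kvm uid/gid oVirt expects.
  import os
  import subprocess

  EXPORT_DIR = "/exports/local-vmdata"
  EXPORTS_LINE = (EXPORT_DIR +
      " *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)\n")

  if not os.path.isdir(EXPORT_DIR):
      os.makedirs(EXPORT_DIR)
  os.chown(EXPORT_DIR, 36, 36)       # vdsm:kvm must own the export
  os.chmod(EXPORT_DIR, 0o755)

  with open("/etc/exports", "a") as exports:
      exports.write(EXPORTS_LINE)

  subprocess.check_call(["service", "nfs", "start"])  # or "systemctl start nfs-server"
  subprocess.check_call(["exportfs", "-r"])           # re-read /etc/exports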
> Or with GlusterFS:
> 1. Use CentOS/Fedora hosts instead of oVirt-Node.
> 2. Configure a replicated GlusterFS volume across your 2 nodes.
> 3. Have 1 datacenter with 1 cluster and 2 nodes with storage type NFS.
> 4. Add 2 data storage domains (the NFS share of the GlusterFS volume and
> the NFS share of your main NFS server).
>
> Disadvantage of GlusterFS with NFS: only one of your 2 nodes exports the
> NFS share, so if that node is down your storage domain is down and you
> have to fix the mount manually.
Agreed.
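
(For reference, I think the replicated volume in that option would be created
more or less like this from one of the nodes -- node names and brick paths
are placeholders, and I haven't tried it:)

  #!/usr/bin/env python
  # Untested sketch: replicated GlusterFS volume across the two nodes.
  # Run once from node01; host names and brick paths are placeholders.
  import subprocess

  def gluster(*args):
      subprocess.check_call(("gluster",) + args)

  gluster("peer", "probe", "node02")       # join node02 to the trusted pool
  gluster("volume", "create", "vmstore", "replica", "2",
          "node01:/gluster/vmstore/brick", "node02:/gluster/vmstore/brick")
  gluster("volume", "start", "vmstore")
  # The volume could then be added in the engine as an NFS data domain,
  # e.g. node01:/vmstore, since Gluster's built-in NFS server exports it.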
>> Finally - I had previously asked about using MD RAID1 redundancy on the
>> root drive, which isn't available yet on the node.  Are there any
>> options for creating redundant local storage using MD RAID1, or is it the
>> same -- no redundancy on local storage unless you're using a RAID card
>> whose driver has been integrated into the node?
>
> MD RAID or DRBD, etc. isn't possible (yet?).
> You could try GlusterFS 3.4 (a replicated volume across your 2 nodes)...
>
> Hope this helps.
Thanks very much for your useful feedback.

Jason.



