[Users] How to configure sharedFS ?

Hi,
From the vdsm code I can see support for a storage domain of type SHARED_FS, but when trying to configure a new storage domain I don't see SHARED_FS as an available domType field in the dropdown. I want to use sharedFS to connect to a gluster mount point on my node, but I am unable to since I don't see that option.
From http://www.ovirt.org/wiki/Features/PosixFSConnection I understand that it's the same as sharedFS. So what is PosixFSConnection then, is it just a new interface that wraps what sharedFS is now? The status of this feature says Done, but I don't see the UI part of it, am I missing something?
Also, please let me know whether using sharedFS to make the node use a gluster mount is correct or not.
Thanks,
Deepak

On Wed, Feb 22, 2012 at 02:32:35PM +0530, Deepak C Shetty wrote:
From the vdsm code I can see support for a storage domain of type SHARED_FS, but when trying to configure a new storage domain I don't see SHARED_FS as an available domType field in the dropdown. I want to use sharedFS to connect to a gluster mount point on my node, but I am unable to since I don't see that option.
From http://www.ovirt.org/wiki/Features/PosixFSConnection I understand that it's the same as sharedFS. So what is PosixFSConnection then, is it just a new interface that wraps what sharedFS is now?
Currently, the NAS types oVirt supports are the local file system and NFS.
The status of this feature says Done, but I don't see the UI part of it, am I missing something?
Look closer at http://www.ovirt.org/wiki/Features/PosixFSConnection#Current_Status: Vdsm has API support for mounting any POSIX file system, but this is not yet available from oVirt Engine and the GUI.
Also, please let me know whether using sharedFS to make the node use a gluster mount is correct or not.
I do not think anyone has tried this yet; be the first!

On 02/22/2012 06:37 PM, Dan Kenigsberg wrote:
Vdsm has API support for mounting any POSIX file system, but this is not yet available from oVirt Engine and the GUI. I do not think anyone has tried this yet; be the first!
Sure :), but how? Since there is no UI support yet, is there a way I can drive this from the vdsm CLI? Is there any oVirt wiki page that provides insight on how to use the vdsm CLI?

On Thu, Feb 23, 2012 at 10:54:26AM +0530, Deepak C Shetty wrote:
Sure :), but how? Since there is no UI support yet, is there a way I can drive this from the vdsm CLI? Is there any oVirt wiki page that provides insight on how to use the vdsm CLI?
I'd start from http://www.ovirt.org/wiki/Vdsm_Standalone and tweak it to use SHAREDFS_DOMAIN instead of LOCALFS_DOMAIN. You'd have to do some digging into vdsm/storage/storage_connection.py for the right params to connectStorageServer.
Dan.
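For illustration, here is a rough sketch of what that connectStorageServer call might look like through vdscli. It is not a verified recipe: the import path, the SHAREDFS_DOMAIN value, and the connection-dict keys are assumptions based on reading the 2012-era sources, so confirm them against vdsm/storage/storage_connection.py (and storage/sd.py) in your tree.

# Rough sketch only -- key names, constants and import path are assumptions;
# verify against vdsm/storage/storage_connection.py and storage/sd.py.
import uuid
from vdsm import vdscli   # vdscli ships with vdsm; import path may differ

SHAREDFS_DOMAIN = 6                    # assumed value of sd.SHAREDFS_DOMAIN
spUUID = str(uuid.uuid4())             # pool UUID you intend to use later

s = vdscli.connect()                   # XML-RPC proxy to the local vdsmd

conList = [{
    'id': str(uuid.uuid4()),
    'connection': 'gluster-host.example.com:myvol',  # hypothetical server:volume
    'vfs_type': 'glusterfs',                         # key name is an assumption
    'mnt_options': '',                               # key name is an assumption
}]

res = s.connectStorageServer(SHAREDFS_DOMAIN, spUUID, conList)
print(res)                             # expect {'status': {'code': 0, ...}, ...}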

On 02/23/2012 04:54 PM, Dan Kenigsberg wrote:
I'd start from http://www.ovirt.org/wiki/Vdsm_Standalone and tweak it to use SHAREDFS_DOMAIN instead of LOCALFS_DOMAIN. You'd have to do some digging into vdsm/storage/storage_connection.py for the right params to connectStorageServer.
Great, let me start and get back if I run into issues. Thanks for the pointer.

----- Original Message -----
You'd have to do some digging into vdsm/storage/storage_connection.py for the right params to connectStorageServer.
Any help on documenting this so people would not have to dig into the code would be greatly appreciated.
Great, let me start and get back if I run into issues. Thanks for the pointer.
Let us know if you need further assistance..

On 02/27/2012 04:55 AM, Ayal Baron wrote:
Any help on documenting this so people would not have to dig into the code would be greatly appreciated.
Great, let me start and get back if I run into issues. Thanks for the pointer.
Let us know if you need further assistance..
This is how I plan to attack it, using vdscli; let me know if my steps are correct?
1) Use createStorageDomain to create SHAREDFS_DOMAIN storage domains of class data and iso.
2) Use createStoragePool and associate the above SDs with this pool.
3) How do I copy a .iso into the newly created ISO domain? engine-iso-uploader won't know about it, right?
4) Create a volume to represent my VM disk.
5) Use create to create a VM and run it.
Is this the recommended way (using individual vdscli commands), or is it the way it's done in http://www.ovirt.org/wiki/Vdsm_Standalone ?

On Wed, Feb 29, 2012 at 07:42:15PM +0530, Deepak C Shetty wrote:
This is how I plan to attack it, using vdscli; let me know if my steps are correct?
1) Use createStorageDomain to create SHAREDFS_DOMAIN storage domains of class data and iso.
2) Use createStoragePool and associate the above SDs with this pool.
3) How do I copy a .iso into the newly created ISO domain? engine-iso-uploader won't know about it, right?
I would've used `cp` (chown to make sure vdsm can read it when needed).
4) Create a volume to represent my VM disk.
5) Use create to create a VM and run it.
Is this the recommended way (using individual vdscli commands), or is it the way it's done in http://www.ovirt.org/wiki/Vdsm_Standalone ?
For human-triggered setup, running vdsClient from bash may be easier, but the suggested python script is expected to take you slightly further down the road to a reproducible, testable application on top of Vdsm. If you have that python script working for LOCALFS, I'd suggest you try making it work for SHAREDFS too.
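To make the planned sequence concrete, here is a rough sketch of steps 1) and 2) as XML-RPC calls, in the spirit of the Vdsm_Standalone script. The numeric constants, the domain version, and the use of the server:volume spec as typeSpecificArg are assumptions to double-check against the vdsm sources and the wiki example.

# Rough sketch of steps 1) and 2); constants, domVersion and typeSpecificArg
# are assumptions -- check the Vdsm_Standalone script and the vdsm sources.
import uuid
from vdsm import vdscli

SHAREDFS_DOMAIN = 6      # assumed sd.SHAREDFS_DOMAIN
DATA_DOM_CLASS = 1       # assumed domain class "data"
ISO_DOM_CLASS = 2        # assumed domain class "iso"
DOM_VERSION = 3          # assumed; use whatever the wiki example passes

s = vdscli.connect()

spUUID = str(uuid.uuid4())
dataSdUUID = str(uuid.uuid4())
isoSdUUID = str(uuid.uuid4())
spec = 'gluster-host.example.com:myvol'   # hypothetical server:volume

# (assumes connectStorageServer has already mounted the volume,
#  as in the earlier sketch)

# 1) create a SHAREDFS data domain and a SHAREDFS iso domain
s.createStorageDomain(SHAREDFS_DOMAIN, dataSdUUID, 'gluster data', spec,
                      DATA_DOM_CLASS, DOM_VERSION)
s.createStorageDomain(SHAREDFS_DOMAIN, isoSdUUID, 'gluster iso', spec,
                      ISO_DOM_CLASS, DOM_VERSION)

# 2) create a pool with the data domain as master and both domains attached
#    (the first argument, poolType, is assumed to be effectively unused here)
masterVersion = 1
s.createStoragePool(0, spUUID, 'my gluster pool', dataSdUUID,
                    [dataSdUUID, isoSdUUID], masterVersion)

# 4) and 5) then follow the rest of the Vdsm_Standalone flow
# (connectStoragePool, spmStart, createVolume, create the VM), swapping
# LOCALFS_DOMAIN for SHAREDFS_DOMAIN wherever the script creates its domain.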

http://www.ovirt.org/wiki/CLI
-- Michael Pasternak, RedHat, ENG-Virtualization R&D

On 02/29/2012 08:38 PM, Michael Pasternak wrote:
http://www.ovirt.org/wiki/CLI
I would like to see something similar for the VDSM CLI (vdsClient) :)

On 03/01/2012 07:44 AM, Deepak C Shetty wrote:
I would like to see something similar for the VDSM CLI (vdsClient) :)
vdsm-cli doesn't deal with users too much; we'll need a password-file approach, session-based handling, or Kerberos to close this gap.

On 03/01/2012 09:31 AM, Itamar Heim wrote:
vdsm-cli doesn't deal with users too much; we'll need a password-file approach, session-based handling, or Kerberos to close this gap.
Adding this to my TODO list.
-- Michael Pasternak, RedHat, ENG-Virtualization R&D

On 03/01/2012 07:44 AM, Deepak C Shetty wrote:
I would like to see something similar for the VDSM CLI (vdsClient) :)
If the VDSM API exposes RSDL (RESTful Service Description Language), we could generate an SDK [1] for it, and then the very same CLI could be used against that SDK.
[1] Our ovirt-engine-sdk [2] is auto-generated from the RSDL exposed by the ovirt-engine API.
[2] http://www.ovirt.org/wiki/SDK
-- Michael Pasternak, RedHat, ENG-Virtualization R&D

On Thu, Mar 01, 2012 at 09:38:41AM +0200, Michael Pasternak wrote:
If the VDSM API exposes RSDL (RESTful Service Description Language), we could generate an SDK for it, and then the very same CLI could be used against that SDK. Our ovirt-engine-sdk is auto-generated from the RSDL exposed by the ovirt-engine API: http://www.ovirt.org/wiki/SDK
A REST API for Vdsm is in the works (http://gerrit.ovirt.org/2021), but I am not sure that mixing the layers and having one CLI for both Engine and Vdsm makes sense. In fact, I would guess that Engine would be surprised if you sprang a VM up under its feet, or migrated an existing VM. That's why vdsClient is considered a test and debug tool.

On 03/04/2012 04:51 PM, Dan Kenigsberg wrote:
A REST API for Vdsm is in the works (http://gerrit.ovirt.org/2021), but I am not sure that mixing the layers and having one CLI for both Engine and Vdsm makes sense.
It's not a mixing of two CLIs. If vdsm exposes RSDL, a vdsm SDK will be generated; i.e., the CLI will be completely different, as it will work against the vdsm SDK. All I'm saying is that if vdsm exposes RSDL, it will get an SDK/CLI for free...
-- Michael Pasternak, RedHat, ENG-Virtualization R&D

On 02/29/2012 08:06 PM, Dan Kenigsberg wrote:
I would've used `cp` (chown to make sure vdsm can read it when needed).
engine-iso-uploader creates the needed dir structure (metadata, images, dom_md, etc.) and puts the .iso in the right dir. Using `cp` I will have to do that myself, right?
For human-triggered setup, running vdsClient from bash may be easier, but the suggested python script is expected to take you slightly further down the road to a reproducible, testable application on top of Vdsm.
If you have that python script working for LOCALFS, I'd suggest you try making it work for SHAREDFS too.
Thanks, I will try using the python script itself. When I use the script and create LOCALFS (and, in future, SHAREDFS) domains, will these domains, and the VMs created on top of them, be visible and manageable from the oVirt Engine side?

----- Original Message -----
engine-iso-uploader creates the needed dir structure (metadata, images, dom_md, etc.) and puts the .iso in the right dir. Using `cp` I will have to do that myself, right?
No. createStorageDomain does all that for you; engine-iso-uploader just copies the ISO files and sets permissions (at least I hope that's all it does).
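For completeness, a sketch of that manual copy. The images/11111111-1111-1111-1111-111111111111 directory and the 36:36 (vdsm:kvm) ownership are the usual ISO-domain conventions, but treat both as assumptions and check what createStorageDomain actually laid down on your mount.

# Sketch of the manual `cp` + chown; the paths, the 11111111-... images
# directory and the 36:36 (vdsm:kvm) ownership are assumptions -- check
# your ISO domain on disk.
import os
import shutil

mountpoint = '/rhev/data-center/mnt/gluster-host.example.com:myvol'  # hypothetical
isoSdUUID = 'your-iso-domain-uuid'                                   # placeholder
dest_dir = os.path.join(mountpoint, isoSdUUID,
                        'images', '11111111-1111-1111-1111-111111111111')

src = '/tmp/some-install-media.iso'        # hypothetical ISO
dst = os.path.join(dest_dir, os.path.basename(src))
shutil.copy(src, dst)
os.chown(dst, 36, 36)                      # vdsm:kvm on a stock node (assumption)
os.chmod(dst, 0o640)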

Hi All,
I am getting this error while doing createStorageDomain for SHAREDFS. From vdsm.log:
-------------------
Thread-29::DEBUG::2012-03-06 03:35:27,127::safelease::53::Storage.Misc.excCmd::(initLock) FAILED: <err> = "panic: [11002] can't open '%s': /rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases: (Invalid argument)\n"; <rc> = 255
Thread-29::WARNING::2012-03-06 03:35:27,127::safelease::55::ClusterLock::(initLock) could not initialise spm lease (255): []
Thread-29::WARNING::2012-03-06 03:35:27,127::sd::328::Storage.StorageDomain::(initSPMlease) lease did not initialize successfully
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/sd.py", line 324, in initSPMlease
    safelease.ClusterLock.initLock(self._getLeasesFilePath())
  File "/usr/share/vdsm/storage/safelease.py", line 56, in initLock
    raise se.ClusterLockInitError()
ClusterLockInitError: Could not initialize cluster lock: ()
Thread-29::INFO::2012-03-06 03:35:27,128::logUtils::39::dispatcher::(wrapper) Run and protect: createStorageDomain, Return response: None

And I get the below error when trying to createStoragePool. From vdsm.log:
-------------------
Thread-35::DEBUG::2012-03-06 03:35:27,323::safelease::72::ClusterLock::(acquire) Acquiring cluster lock for domain ff214060-642d-43b7-ac51-23278371ee1f
Thread-35::DEBUG::2012-03-06 03:35:27,323::safelease::81::Storage.Misc.excCmd::(acquire) '/usr/bin/sudo -n /usr/bin/setsid /usr/bin/ionice -c1 -n0 /bin/su vdsm -s /bin/sh -c "/usr/libexec/vdsm/spmprotect.sh start ff214060-642d-43b7-ac51-23278371ee1f 1000 5 /rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases 30000 1000 3"' (cwd /usr/libexec/vdsm)
Thread-35::DEBUG::2012-03-06 03:35:27,374::safelease::81::Storage.Misc.excCmd::(acquire) FAILED: <err> = "panic: [11019] can't open '%s': /rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases: (Invalid argument)\n"; <rc> = 1
Thread-35::ERROR::2012-03-06 03:35:27,374::task::855::TaskManager.Task::(_setError) Task=`4d02e106-5d7c-4373-a834-5d5f4ea297be`::Unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/storage/task.py", line 863, in _run
    return fn(*args, **kargs)
  File "/usr/share/vdsm/logUtils.py", line 38, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 694, in createStoragePool
    return sp.StoragePool(spUUID, self.taskMng).create(poolName, masterDom, domList, masterVersion, safeLease)
  File "/usr/share/vdsm/storage/sp.py", line 546, in create
    msd.acquireClusterLock(self.id)
  File "/usr/share/vdsm/storage/sd.py", line 379, in acquireClusterLock
    self._clusterLock.acquire(hostID)
  File "/usr/share/vdsm/storage/safelease.py", line 83, in acquire
    raise se.AcquireLockFailure(self._sdUUID, rc, out, err)
AcquireLockFailure: Could not obtain lock: 'id=ff214060-642d-43b7-ac51-23278371ee1f, rc=1, out=[], err=["panic: [11019] can\'t open \'%s\': /rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases: (Invalid argument)"]'
Thread-35::DEBUG::2012-03-06 03:35:27,376::task::874::TaskManager.Task::(_run) Task=`4d02e106-5d7c-4373-a834-5d5f4ea297be`::Task._run: 4d02e106-5d7c-4373-a834-5d5f4ea297be (6, '82350e39-5940-48c0-81b3-c9955ada0f08', 'my gluster pool', 'ff214060-642d-43b7-ac51-23278371ee1f', ['ff214060-642d-43b7-ac51-23278371ee1f'], 1) {} failed - stopping task for it.
Thread-35::DEBUG::2012-03-06 03:35:27,381::resourceManager::562::ResourceManager::(releaseResource) No one is waiting for resource 'Storage.82350e39-5940-48c0-81b3-c9955ada0f08', Clearing records.
Thread-35::ERROR::2012-03-06 03:35:27,381::dispatcher::90::Storage.Dispatcher.Protect::(run) {'status': {'message': 'Could not obtain lock: \'id=ff214060-642d-43b7-ac51-23278371ee1f, rc=1, out=[], err=["panic: [11019] can\\\'t open \\\'%s\\\': /rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases: (Invalid argument)"]\'', 'code': 651}}
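The "(Invalid argument)" on an open of the leases file smells like an O_DIRECT problem on the FUSE mount, which is indeed how this thread gets resolved further down. A small diagnostic sketch to test that theory, assuming the path from the log above:

# Quick check: safelease opens the leases file with O_DIRECT; a FUSE mount
# without O_DIRECT support fails such opens with EINVAL ("Invalid argument").
import os

path = ('/rhev/data-center/mnt/llm65.in.ibm.com:dpkvol/'
        'ff214060-642d-43b7-ac51-23278371ee1f/dom_md/leases')

try:
    fd = os.open(path, os.O_RDWR | os.O_DIRECT)
    os.close(fd)
    print('O_DIRECT open works on this mount')
except OSError as e:
    print('O_DIRECT open failed: %s' % e)   # EINVAL here points at the FUSE mount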

On 02/29/2012 08:06 PM, Dan Kenigsberg wrote:
If you have that python script working for LOCALFS, I'd suggest you try making it work for SHAREDFS too.
I was able to take the Vdsm_Standalone example and modify it to use SHAREDFS, export glusterfs as a DATA_DOMAIN, and invoke a VM backed by gluster storage. I edited http://www.ovirt.org/wiki/Vdsm_Standalone (scroll down) and added it as a SHAREDFS example.

On Wed, Mar 14, 2012 at 07:04:47PM +0530, Deepak C Shetty wrote:
I was able to take the Vdsm_Standalone example and modify it to use SHAREDFS, export glusterfs as a DATA_DOMAIN, and invoke a VM backed by gluster storage. I edited http://www.ovirt.org/wiki/Vdsm_Standalone (scroll down) and added it as a SHAREDFS example.
Thanks! It is great news that all you need is O_DIRECT in FUSE, and that Vdsm's SHAREDFS interface works as it is for gluster.
Regards,
Dan.
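For readers trying to reproduce this: one way to get direct I/O on a glusterfs FUSE mount is the direct-io-mode mount option, sketched below. Whether that option (or simply a new enough FUSE/kernel) is what was needed here is not spelled out in this thread; Deepak's exact setup is on the Vdsm_Standalone wiki page he edited.

# Sketch: mount the gluster volume with direct I/O enabled on the FUSE client.
# The direct-io-mode option exists for glusterfs mounts, but treat its use here
# as an assumption; see the edited Vdsm_Standalone wiki page for the real setup.
import subprocess

volume = 'llm65.in.ibm.com:dpkvol'                           # from the logs above
mountpoint = '/rhev/data-center/mnt/llm65.in.ibm.com:dpkvol'

subprocess.check_call(['mount', '-t', 'glusterfs',
                       '-o', 'direct-io-mode=enable',
                       volume, mountpoint])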

On 03/19/2012 03:07 PM, Dan Kenigsberg wrote:
Thanks! It is great news that all you need is O_DIRECT in FUSE, and that Vdsm's SHAREDFS interface works as it is for gluster. Regards, Dan.
Hello, I have more questions on this, now that I am revisiting it. The last time I tried using SHAREDFS, I started from the createStorageDomain verb, and it works fine.
But now we have the connectStorageServer verb, which I believe is the new way of doing things? If I start from the connectStorageServer verb to mount using SHAREDFS (which goes via the PosixFs... MountConnection flow), that won't help me entirely here, right? Because it only mounts based on the dict sent, but does not do anything with the image and metadata structures (which the createStorageDomain flow did).
I am wondering if it's too early to start using connectStorageServer? If not, how can I rewrite the above vdsm standalone example using connectStorageServer instead of the createStorageDomain flow?

On 06/17/2012 03:50 PM, Deepak C Shetty wrote:
But now we have the connectStorageServer verb, which I believe is the new way of doing things? If I start from the connectStorageServer verb to mount using SHAREDFS (which goes via the PosixFs... MountConnection flow), that won't help me entirely here, right? Because it only mounts based on the dict sent, but does not do anything with the image and metadata structures (which the createStorageDomain flow did).
I am wondering if it's too early to start using connectStorageServer? If not, how can I rewrite the above vdsm standalone example using connectStorageServer instead of the createStorageDomain flow?
Will a solution via the engine (i.e., creating a PosixFS storage domain) be good for you?

On 06/17/2012 07:06 PM, Yair Zaslavsky wrote:
Will a solution via the engine (i.e., creating a PosixFS storage domain) be good for you?
Well, no. I kind of misinterpreted the problem, so my question stands cancelled. The right way indeed is to use connectStorageServer followed by createStorageDomain; I was a bit confused initially, but now it's cleared up.
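A rough sketch of that ordering, with the same caveats as before (the dict keys and numeric constants are assumptions, not the documented API):

# connectStorageServer mounts the volume (PosixFs/MountConnection flow);
# createStorageDomain then lays the metadata/images structure down on it.
# Key names and numeric constants are assumptions -- verify in your vdsm tree.
import uuid
from vdsm import vdscli

SHAREDFS_DOMAIN = 6
DATA_DOM_CLASS = 1
DOM_VERSION = 3

s = vdscli.connect()
spUUID = str(uuid.uuid4())
sdUUID = str(uuid.uuid4())
spec = 'gluster-host.example.com:myvol'   # hypothetical server:volume

# 1) mount
s.connectStorageServer(SHAREDFS_DOMAIN, spUUID,
                       [{'id': str(uuid.uuid4()),
                         'connection': spec,
                         'vfs_type': 'glusterfs',   # assumed key names
                         'mnt_options': ''}])

# 2) only now format the mounted volume as a storage domain
s.createStorageDomain(SHAREDFS_DOMAIN, sdUUID, 'gluster data', spec,
                      DATA_DOM_CLASS, DOM_VERSION)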

On 06/17/2012 06:20 PM, Deepak C Shetty wrote:
I am wondering if it's too early to start using connectStorageServer? If not, how can I rewrite the above vdsm standalone example using connectStorageServer instead of the createStorageDomain flow?
Guess I got confused: the standalone example does use connectStorageServer followed by createStorageDomain. Scratch the question, my bad.
participants (6)
- Ayal Baron
- Dan Kenigsberg
- Deepak C Shetty
- Itamar Heim
- Michael Pasternak
- Yair Zaslavsky