Re: [Users] Problem with creating a glusterfs volume

Here are the message and the logs again, zipped this time; the first delivery failed:

Ok, here are the logs: 4 node logs and 1 engine log. I tried making the /data folder owned by root, and then by 36:36; neither worked. The volume is named "data" to match the folders on the nodes.

Let me know what you think,
Dominic

On Mon, Sep 10, 2012 at 11:24 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Here are the other two logs; I forgot them.
dk
On Mon, Sep 10, 2012 at 11:19 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Ok, here are the logs: 4 node logs and 1 engine log. I tried making the /data folder owned by root, and then by 36:36; neither worked. The volume is named "data" to match the folders on the nodes.
Let me know what you think,
Dominic
On Thu, Sep 6, 2012 at 8:33 AM, Maxim Burgerhout <maxim@wzzrd.com> wrote:
I just ran into this as well, and it seems that you have to either reformat previously used gluster bricks or manually tweak some extended attributes.
Maybe this helps you in setting up your gluster volume, Dominic?
More info here: https://bugzilla.redhat.com/show_bug.cgi?id=812214
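(For reference, a minimal sketch of that workaround, assuming the bricks live at /data as in this thread; these are the stale-volume markers discussed around that bug, and removing them destroys the old volume identity on the brick:)

# On each node, clear the leftover gluster metadata from a
# previously used brick instead of reformatting it:
setfattr -x trusted.glusterfs.volume-id /data
setfattr -x trusted.gfid /data
rm -rf /data/.glusterfs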
Maxim Burgerhout
maxim@wzzrd.com
----------------
EB11 5E56 E648 9D99 E8EF 05FB C513 6FD4 1302 B48A
On Thu, Sep 6, 2012 at 7:50 AM, Shireesh Anjal <sanjal@redhat.com> wrote:
Hi Dominic,
Looking at the engine log immediately after trying to create the volume should tell us on which node the gluster volume creation was attempted. Then looking at the vdsm log on that node should help us identify the exact reason for the failure.
In case this doesn't help, can you please reproduce the issue and send back all 5 log files? (engine.log from the engine server and vdsm.log from the 4 nodes.)
Regards, Shireesh
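(For reference, a minimal way to do the cross-check Shireesh describes, assuming the default Fedora log locations; the exact engine command name to grep for is an assumption:)

# On the engine server: find which node the create was dispatched to.
grep -i 'glustervolume' /var/log/ovirt-engine/engine.log | tail -n 20

# Then on that node: pull the vdsm context around the failure.
grep -A 5 'volumeCreate' /var/log/vdsm/vdsm.log | tail -n 40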
On Wednesday 05 September 2012 11:50 PM, Dominic Kaiser wrote:
So I have a problem creating glusterfs volumes. Here is the install:
1. oVirt 3.1
2. 4 nodes running Fedora 17 with kernel 3.3.4-5.fc17.x86_64
3. 4 nodes peer joined and running
4. 4 nodes added as hosts to oVirt
5. Created a directory on each node at the path /data
6. chown -R 36:36 /data on all nodes
7. Went back to oVirt and created a distributed/replicated volume, adding the 4 nodes with a brick path of /data
I received this error:
Creation of Gluster Volume maingfs1 failed.
I went and looked at the vdsm logs on the nodes and on the oVirt server, which did not say much. Where else should I look? Also, this error is vague; what does it mean?
--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations
cell: 617-230-1412
fax: 617-252-0238
email: dominic@bostonvineyard.org

_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users

On 09/10/2012 06:27 PM, Dominic Kaiser wrote:
Here are the message and the logs again, zipped this time; the first delivery failed:

Ok, here are the logs: 4 node logs and 1 engine log. I tried making the /data folder owned by root, and then by 36:36; neither worked. The volume is named "data" to match the folders on the nodes.
Let me know what you think,
Dominic
This is the actual failure (taken from gfs2vdsm.log):

Thread-332442::DEBUG::2012-09-10 10:28:05,788::BindingXMLRPC::859::vds::(wrapper) client [10.3.0.241]::call volumeCreate with ('data', ['10.4.0.97:/data', '10.4.0.98:/data', '10.4.0.99:/data', '10.4.0.100:/data'], 2, 0, ['TCP']) {} flowID [406f2c8e]
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,792::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/sbin/gluster --mode=script volume create data replica 2 transport TCP 10.4.0.97:/data 10.4.0.98:/data 10.4.0.99:/data 10.4.0.100:/data' (cwd None)
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,900::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err> = 'Host 10.4.0.99 not a friend\n'; <rc> = 255
MainProcess|Thread-332442::ERROR::2012-09-10 10:28:05,900::supervdsmServer::76::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 74, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 46, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 176, in volumeCreate
    raise ge.GlusterVolumeCreateFailedException(rc, out, err)
GlusterVolumeCreateFailedException: Volume create failed
error: Host 10.4.0.99 not a friend
return code: 255
Thread-332442::ERROR::2012-09-10 10:28:05,901::BindingXMLRPC::877::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 864, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 87, in volumeCreate
    transportList)
  File "/usr/share/vdsm/supervdsm.py", line 67, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 65, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeCreate
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod
    kind, result = conn.recv()
TypeError: ('__init__() takes exactly 4 arguments (1 given)', <class 'gluster.exception.GlusterVolumeCreateFailedException'>, ())

Can you please run gluster peer status on all your nodes? Also, it appears that '10.4.0.99' is problematic; can you try creating the volume without it?
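(For reference, a minimal way to run that check, plus a hypothetical recovery step that is an assumption on my part rather than part of Haim's instructions:)

# On every node: the other three peers should each report
# "State: Peer in Cluster (Connected)".
gluster peer status

# If 10.4.0.99 shows up missing or rejected, re-probing it from a
# node that is already in the cluster may help:
gluster peer probe 10.4.0.99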

Hey All,

So I finally found the problem: cheap NICs. After installing Intel NICs there were no problems creating gluster volumes, including distributed-replicated ones. Broadcom and Realtek, yuk!

So now I am trying to mount the gluster volume as an NFS mount and am having a problem. It times out as if it were blocked by a firewall. I am trying:

mount -t nfs gfs1.bostonvineyard.org:/export /home/administrator/test

Here is the tail of vdsm.log on gfs1:

[root@gfs1 vdsm]# tail vdsm.log
Thread-88731::DEBUG::2012-09-21 10:35:56,566::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-88731::DEBUG::2012-09-21 10:35:56,567::task::978::TaskManager.Task::(_decref) Task=`01b69eed-de59-4e87-8b28-5268b5dcbb50`::ref 0 aborting False
Thread-88737::DEBUG::2012-09-21 10:36:06,890::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state init -> state preparing
Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-88737::DEBUG::2012-09-21 10:36:06,891::task::1172::TaskManager.Task::(prepare) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::finished: {}
Thread-88737::DEBUG::2012-09-21 10:36:06,892::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state preparing -> state finished
Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-88737::DEBUG::2012-09-21 10:36:06,893::task::978::TaskManager.Task::(_decref) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::ref 0 aborting False

Do you know why I cannot connect via NFS? I am using an older kernel (not 3.5) and iptables is off.

Dominic
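(A quick, minimal way to check whether the gluster NFS server is actually up and registered on that node; standard NFS diagnostics, nothing oVirt-specific:)

# Is the gluster NFS service registered with the portmapper?
# Look for nfs and mountd entries on TCP.
rpcinfo -p gfs1.bostonvineyard.org

# Which exports is the server advertising?
showmount -e gfs1.bostonvineyard.org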

I can mount it on another computer with this command:

mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test

So the volume works, but I get a 500 timeout error when trying to add it as a storage domain in oVirt. Weird?

dk

Here is the engine.log info:

[root@ovirt ovirt-engine]# tail engine.log
2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 hosts
2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable hosts done
2012-09-21 11:10:00,008 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains
2012-09-21 11:10:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 storage domains
2012-09-21 11:10:00,010 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains done
2012-09-21 11:10:22,710 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-84) Failed to decryptData must not be longer than 256 bytes
2012-09-21 11:10:22,726 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-12) Failed to decryptData must start with zero
2012-09-21 11:10:54,519 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--0.0.0.0-8009-11) [3769be9c] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System
2012-09-21 11:10:54,537 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] START, DisconnectStorageServerVDSCommand(vdsId = 3822e6c0-0295-11e2-86e6-d74ad5358c03, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: gfs1.bostonvineyard.org:/data };]), log id: 16dd4a1b
2012-09-21 11:10:56,417 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] FINISH, DisconnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=477}, log id: 16dd4a1b

Thanks,
dk

I noticed something. If I am mounting the gluster share from another computer and do not include mountproto=tcp, it times out; vers=3 or 4 does not matter. Could this be why I cannot add it from the engine GUI?

dk
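(If the engine-side mount is failing for the same reason, one possible workaround, an assumption on my part rather than a confirmed oVirt setting, is to make v3-over-TCP the default for every NFS mount on each host via nfs-utils' /etc/nfsmount.conf; option names per nfsmount.conf(5):)

# /etc/nfsmount.conf -- defaults applied to all "mount -t nfs"
# invocations on this host, including mounts made by vdsm:
[ NFSMount_Global_Options ]
Defaultvers=3
Defaultproto=tcp
mountproto=tcp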

Any ideas? Pretty please...

dk
wrote:
I noticed something. If I am trying to mount the gluster share from another computer and do not include mounproto=tcp it times out. vers=3 or 4 does not matter. Could this be why I can not add it from the engine gui?
dk
On Fri, Sep 21, 2012 at 11:12 AM, Dominic Kaiser < dominic@bostonvineyard.org> wrote:
Here is the engine.log info:
[root@ovirt ovirt-engine]# tail engine.log 2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 hosts 2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable hosts done 2012-09-21 11:10:00,008 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains 2012-09-21 11:10:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 storage domains 2012-09-21 11:10:00,010 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains done 2012-09-21 11:10:22,710 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-84) Failed to decryptData must not be longer than 256 bytes 2012-09-21 11:10:22,726 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-12) Failed to decryptData must start with zero 2012-09-21 11:10:54,519 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--0.0.0.0-8009-11) [3769be9c] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System 2012-09-21 11:10:54,537 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] START, DisconnectStorageServerVDSCommand(vdsId = 3822e6c0-0295-11e2-86e6-d74ad5358c03, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: gfs1.bostonvineyard.org:/data };]), log id: 16dd4a1b 2012-09-21 11:10:56,417 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] FINISH, DisconnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=477}, log id: 16dd4a1b
Thanks,
dk
On Fri, Sep 21, 2012 at 11:09 AM, Dominic Kaiser < dominic@bostonvineyard.org> wrote:
I can mount to another computer with this command:
mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test
So volumes work but I get a 500 error timeout when trying to add as a storage domain in ovirt. weird?
dk
On Fri, Sep 21, 2012 at 10:44 AM, Dominic Kaiser < dominic@bostonvineyard.org> wrote:
Hey All,
So I finally found the problem. Cheap NIC's. Installed Intel NIC's no problems creating gluster volumes and distributed replicated ones. Broadcom and Realtek yuk! So now I am trying to mount the gluster volume as a nfs mount and am having a problem. It is timing out like it is blocked by a firewall.
I am trying to: mount -t nfs gfs1.bostonvineyard.org:/export /home/administrator/test
Here is gfs1 tail vdsm.log
[root@gfs1 vdsm]# tail vdsm.log Thread-88731::DEBUG::2012-09-21 10:35:56,566::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-88731::DEBUG::2012-09-21 10:35:56,567::task::978::TaskManager.Task::(_decref) Task=`01b69eed-de59-4e87-8b28-5268b5dcbb50`::ref 0 aborting False Thread-88737::DEBUG::2012-09-21 10:36:06,890::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state init -> state preparing Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None) Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {} Thread-88737::DEBUG::2012-09-21 10:36:06,891::task::1172::TaskManager.Task::(prepare) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::finished: {} Thread-88737::DEBUG::2012-09-21 10:36:06,892::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state preparing -> state finished Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {} Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {} Thread-88737::DEBUG::2012-09-21 10:36:06,893::task::978::TaskManager.Task::(_decref) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::ref 0 aborting False
Do you know why I can not connect via NFS? Using an older kernel not 3.5 and iptables are off.
Dominic
On Mon, Sep 10, 2012 at 12:20 PM, Haim Ateya <hateya@redhat.com> wrote:
On 09/10/2012 06:27 PM, Dominic Kaiser wrote:
Here is the message and the logs again except zipped I failed the first delivery:
Ok here are the logs 4 node and 1 engine log. Tried making /data folder owned by root and then tried by 36:36 neither worked. Name of volume is data to match folders on nodes also.
Let me know what you think,
Dominic
this is the actual failure (taken from gfs2vdsm.log).
Thread-332442::DEBUG::2012-09-**10 10:28:05,788::BindingXMLRPC::**859::vds::(wrapper) client [10.3.0.241]::call volumeCreate with ('data', ['10.4.0.97:/data', '10.4.0.98:/data', '10.4.0.99:/data', '10.4.0.100:/data'], 2, 0, ['TCP']) {} flowID [406f2c8e] MainProcess|Thread-332442::**DEBUG::2012-09-10 10:28:05,792::__init__::1249::**Storage.Misc.excCmd::(_log) '/usr/sbin/gluster --mode=script volume create data replica 2 transport TCP 10.4.0.97:/data 10.4.0.98:/data 10 .4.0.99:/data 10.4.0.100:/data' (cwd None) MainProcess|Thread-332442::**DEBUG::2012-09-10 10:28:05,900::__init__::1249::**Storage.Misc.excCmd::(_log) FAILED: <err> = 'Host 10.4.0.99 not a friend\n'; <rc> = 255 MainProcess|Thread-332442::**ERROR::2012-09-10 10:28:05,900::supervdsmServer:**:76::SuperVdsm.ServerCallback:**:(wrapper) Error in wrapper Traceback (most recent call last): File "/usr/share/vdsm/**supervdsmServer.py", line 74, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/**supervdsmServer.py", line 286, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.**py", line 46, in wrapper return func(*args, **kwargs) File "/usr/share/vdsm/gluster/cli.**py", line 176, in volumeCreate raise ge.**GlusterVolumeCreateFailedExcep**tion(rc, out, err) GlusterVolumeCreateFailedExcep**tion: Volume create failed error: Host 10.4.0.99 not a friend return code: 255 Thread-332442::ERROR::2012-09-**10 10:28:05,901::BindingXMLRPC::**877::vds::(wrapper) unexpected error Traceback (most recent call last): File "/usr/share/vdsm/**BindingXMLRPC.py", line 864, in wrapper res = f(*args, **kwargs) File "/usr/share/vdsm/gluster/api.**py", line 32, in wrapper rv = func(*args, **kwargs) File "/usr/share/vdsm/gluster/api.**py", line 87, in volumeCreate transportList) File "/usr/share/vdsm/supervdsm.py"**, line 67, in __call__ return callMethod() File "/usr/share/vdsm/supervdsm.py"**, line 65, in <lambda> **kwargs) File "<string>", line 2, in glusterVolumeCreate File "/usr/lib64/python2.7/**multiprocessing/managers.py", line 759, in _callmethod kind, result = conn.recv() TypeError: ('__init__() takes exactly 4 arguments (1 given)', <class 'gluster.exception.**GlusterVolumeCreateFailedExcep**tion'>, ())
Can you please run 'gluster peer status' on all your nodes? Also, it appears that '10.4.0.99' is problematic; can you try creating the volume without it?
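For reference, a sketch of what that check and a retry might look like from one of the nodes; the IPs are the ones from Dominic's setup, and the commands are the standard gluster CLI:

    # On each node: the other peers should show State: Peer in Cluster (Connected)
    gluster peer status

    # If 10.4.0.99 is missing or rejected, re-probe it from a healthy node
    gluster peer probe 10.4.0.99

    # Retry the create without the problem host. Note the brick count must be
    # a multiple of the replica count, so a replica-2 volume needs 2 (or 4) bricks.
    gluster volume create data replica 2 transport tcp 10.4.0.97:/data 10.4.0.98:/data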

On 09/21/2012 08:09 AM, Dominic Kaiser wrote:
I can mount to another computer with this command:
mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test
I notice that in your previous message, citing the mount that didn't work, you were mounting :/export, and above you're mounting :/data. Can you also mount the export volume from another computer?
So volumes work, but I get a 500 timeout error when trying to add the volume as a storage domain in oVirt. Weird?
dk
On Fri, Sep 21, 2012 at 10:44 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Hey All,
So I finally found the problem: cheap NICs. After installing Intel NICs, I had no problems creating gluster volumes, including distributed replicated ones. Broadcom and Realtek, yuck! Now I am trying to mount the gluster volume as an NFS mount and am having a problem: it times out as if it were blocked by a firewall.
I am trying to mount with: mount -t nfs gfs1.bostonvineyard.org:/export /home/administrator/test
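For comparison, the same mount with NFSv3 and TCP forced explicitly would look like this; the options are the ones that worked for the :/data mount quoted earlier in this message, and /mnt/test is just an arbitrary mount point:

    mkdir -p /mnt/test
    mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/export /mnt/test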
-- @jasonbrooks

Yes, I can mount both to another computer, just not to oVirt. I noticed on the other computer, which is Ubuntu 12.04, that if you leave mountproto=tcp out of the command, it does not mount. Does the engine default to TCP?

Dk

On Fri 21 Sep 2012 04:19:27 PM PDT, Dominic Kaiser wrote:
Yes, I can mount both to another computer, just not to oVirt. I noticed on the other computer, which is Ubuntu 12.04, that if you leave mountproto=tcp out of the command, it does not mount. Does the engine default to TCP?
I believe that the Gluster NFS server only supports TCP. On my setup, I've edited /etc/nfsmount.conf with Defaultvers=3, Nfsvers=3, and Defaultproto=tcp.
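For reference, a minimal sketch of that configuration; the section header follows the stock nfsmount.conf(5) layout shipped with nfs-utils, and only the three options Jason names are set:

    # /etc/nfsmount.conf -- default all NFS mounts to v3 over TCP, which is
    # what Gluster's built-in NFS server expects
    [ NFSMount_Global_Options ]
    Defaultvers=3
    Nfsvers=3
    Defaultproto=tcp

With this in place, a plain "mount -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test" should behave like the explicit "-o mountproto=tcp,vers=3" form Dominic quoted.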
-- @jasonbrooks
Participants (3): Dominic Kaiser, Haim Ateya, Jason Brooks