Yes, I can mount both to another computer, just not to oVirt. I
noticed on the other computer, which is Ubuntu 12.04, that if you leave
mountproto=tcp out of the command it does not mount. Does engine
default to TCP?
I believe that the gluster NFS server only supports TCP. On my setup,
I've edited /etc/nfsmount.conf with Defaultvers=3, Nfsvers=3, and
Defaultproto=tcp.
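For reference, a minimal /etc/nfsmount.conf forcing v3 over TCP might look like the sketch below (section and option names as documented in nfsmount.conf(5); adjust to your distribution):

```ini
# /etc/nfsmount.conf -- force NFSv3 over TCP for all NFS mounts,
# since gluster's built-in NFS server does not serve v4 or UDP
[ NFSMount_Global_Options ]
Defaultvers=3
Nfsvers=3
Defaultproto=tcp
```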
Dk
On Sep 21, 2012 6:36 PM, "Jason Brooks" <jbrooks@redhat.com> wrote:
On 09/21/2012 08:09 AM, Dominic Kaiser wrote:
I can mount to another computer with this command:

mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test
I notice that in your previous message, citing the mount that didn't
work, you were mounting :/export, and above you're mounting :/data.
Can you also mount the export volume from another computer?
So the volumes work, but I get a 500 error (timeout) when trying to
add one as a storage domain in oVirt. Weird?

dk
On Fri, Sep 21, 2012 at 10:44 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Hey All,
So I finally found the problem: cheap NICs. Installed Intel NICs and
had no problems creating gluster volumes, including distributed
replicated ones. Broadcom and Realtek, yuk! So now I am trying to
mount the gluster volume as an NFS mount and am having a problem. It
is timing out as if it were blocked by a firewall.
I am trying to: mount -t nfs gfs1.bostonvineyard.org:/export /home/administrator/test
Here is the tail of vdsm.log on gfs1:
[root@gfs1 vdsm]# tail vdsm.log
Thread-88731::DEBUG::2012-09-21 10:35:56,566::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-88731::DEBUG::2012-09-21 10:35:56,567::task::978::TaskManager.Task::(_decref) Task=`01b69eed-de59-4e87-8b28-5268b5dcbb50`::ref 0 aborting False
Thread-88737::DEBUG::2012-09-21 10:36:06,890::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state init -> state preparing
Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)
Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}
Thread-88737::DEBUG::2012-09-21 10:36:06,891::task::1172::TaskManager.Task::(prepare) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::finished: {}
Thread-88737::DEBUG::2012-09-21 10:36:06,892::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state preparing -> state finished
Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}
Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}
Thread-88737::DEBUG::2012-09-21 10:36:06,893::task::978::TaskManager.Task::(_decref) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::ref 0 aborting False
Do you know why I cannot connect via NFS? I'm using an older kernel
(not 3.5), and iptables is off.
Dominic
On Mon, Sep 10, 2012 at 12:20 PM, Haim Ateya <hateya@redhat.com> wrote:
On 09/10/2012 06:27 PM, Dominic Kaiser wrote:
Here is the message and the logs again, zipped this time; the first
delivery failed:

Ok, here are the logs: 4 node logs and 1 engine log. Tried making the
/data folder owned by root and then tried 36:36; neither worked. The
name of the volume is data, to match the folders on the nodes.

Let me know what you think,

Dominic
This is the actual failure (taken from gfs2vdsm.log):
Thread-332442::DEBUG::2012-09-10 10:28:05,788::BindingXMLRPC::859::vds::(wrapper) client [10.3.0.241]::call volumeCreate with ('data', ['10.4.0.97:/data', '10.4.0.98:/data', '10.4.0.99:/data', '10.4.0.100:/data'], 2, 0, ['TCP']) {} flowID [406f2c8e]
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,792::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/sbin/gluster --mode=script volume create data replica 2 transport TCP 10.4.0.97:/data 10.4.0.98:/data 10.4.0.99:/data 10.4.0.100:/data' (cwd None)
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,900::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err> = 'Host 10.4.0.99 not a friend\n'; <rc> = 255
MainProcess|Thread-332442::ERROR::2012-09-10 10:28:05,900::supervdsmServer::76::SuperVdsm.ServerCallback::(wrapper) Error in wrapper
Traceback (most recent call last):
  File "/usr/share/vdsm/supervdsmServer.py", line 74, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/supervdsmServer.py", line 286, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 46, in wrapper
    return func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/cli.py", line 176, in volumeCreate
    raise ge.GlusterVolumeCreateFailedException(rc, out, err)
GlusterVolumeCreateFailedException: Volume create failed
error: Host 10.4.0.99 not a friend
return code: 255
Thread-332442::ERROR::2012-09-10 10:28:05,901::BindingXMLRPC::877::vds::(wrapper) unexpected error
Traceback (most recent call last):
  File "/usr/share/vdsm/BindingXMLRPC.py", line 864, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper
    rv = func(*args, **kwargs)
  File "/usr/share/vdsm/gluster/api.py", line 87, in volumeCreate
    transportList)
  File "/usr/share/vdsm/supervdsm.py", line 67, in __call__
    return callMethod()
  File "/usr/share/vdsm/supervdsm.py", line 65, in <lambda>
    **kwargs)
  File "<string>", line 2, in glusterVolumeCreate
  File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod
    kind, result = conn.recv()
TypeError: ('__init__() takes exactly 4 arguments (1 given)', <class 'gluster.exception.GlusterVolumeCreateFailedException'>, ())
Can you please run 'gluster peer status' on all your nodes? Also, it
appears that '10.4.0.99' is problematic; can you try creating the
volume without it?
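A quick check along those lines could look for any peer whose state is not "Peer in Cluster (Connected)". A hedged sketch follows; the sample output is illustrative only, not taken from this cluster:

```shell
# Illustrative sample of 'gluster peer status' output; on a real node you
# would instead run: peer_status=$(gluster peer status)
peer_status='Hostname: 10.4.0.98
State: Peer in Cluster (Connected)
Hostname: 10.4.0.99
State: Peer Rejected (Connected)
Hostname: 10.4.0.100
State: Peer in Cluster (Connected)'

# A peer in a Rejected or Disconnected state is exactly the kind of
# condition that produces "not a friend" errors on volume create.
bad=$(printf '%s\n' "$peer_status" | grep -c 'Rejected\|Disconnected')
echo "problem peers: $bad"
```

For the sample above this counts one problem peer (10.4.0.99), matching the host the traceback complains about.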
On Mon, Sep 10, 2012 at 11:24 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Here are the other two logs; I forgot them.
dk
On Mon, Sep 10, 2012 at 11:19 AM, Dominic Kaiser <dominic@bostonvineyard.org> wrote:
Ok, here are the logs: 4 node logs and 1 engine log. Tried making the
/data folder owned by root and then tried 36:36; neither worked. The
name of the volume is data, to match the folders on the nodes.

Let me know what you think,

Dominic
On Thu, Sep 6, 2012 at 8:33 AM, Maxim Burgerhout <maxim@wzzrd.com> wrote:
I just ran into this as well, and it seems that you have to either
reformat previously used gluster bricks or manually tweak some
extended attributes.

Maybe this helps you in setting up your gluster volume, Dominic?
More info here:
https://bugzilla.redhat.com/show_bug.cgi?id=812214
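For what it's worth, the workaround discussed there amounts to clearing gluster's ownership metadata from the reused brick path. A hedged sketch (the actual commands are shown as comments because they must run as root against a real brick; setfattr comes from the attr package):

```shell
brick=/data   # brick path used in this thread

# A brick that belonged to a previous volume carries a
# trusted.glusterfs.volume-id xattr and a .glusterfs metadata directory;
# both must be removed before the path can be reused for a new volume:
#   setfattr -x trusted.glusterfs.volume-id "$brick"
#   setfattr -x trusted.gfid "$brick"
#   rm -rf "$brick/.glusterfs"
echo "would clean brick: $brick"
```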
Maxim Burgerhout
maxim@wzzrd.com
----------------
EB11 5E56 E648 9D99 E8EF 05FB C513 6FD4 1302 B48A
On Thu, Sep 6, 2012 at 7:50 AM, Shireesh Anjal <sanjal@redhat.com> wrote:
Hi Dominic,
Looking at the engine log immediately after trying to create the
volume should tell us on which node the gluster volume creation was
attempted. Then looking at the vdsm log on that node should help us
identify the exact reason for the failure.

In case this doesn't help you, can you please simulate the issue
again and send back all 5 log files? (engine.log from the engine
server and vdsm.log from the 4 nodes)

Regards,
Shireesh
On Wednesday 05 September 2012 11:50 PM, Dominic Kaiser wrote:
So I have a problem creating glusterfs volumes. Here is the install:

1. oVirt 3.1
2. 4 nodes running Fedora 17 with kernel 3.3.4-5.fc17.x86_64
3. 4 nodes peer joined and running
4. 4 nodes added as hosts to oVirt
5. Created a directory on each node with the path /data
6. chown -R 36:36 /data on all nodes
7. Went back to oVirt and created a distributed/replicated volume,
   adding the 4 nodes with a brick path of /data
I received this error:

Creation of Gluster Volume maingfs1 failed.

I went and looked at the vdsm logs on the nodes and the oVirt server,
which did not say much. Where else should I look? Also, this error is
vague; what does it mean?
--
Dominic Kaiser
Greater Boston Vineyard
Director of Operations

cell: 617-230-1412
fax: 617-252-0238
email: dominic@bostonvineyard.org
_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
--
@jasonbrooks