Here is the engine.log info:<div><br></div><div><div>[root@ovirt ovirt-engine]# tail engine.log</div><div>2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 hosts</div>
<div>2012-09-21 11:10:00,007 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable hosts done</div><div>2012-09-21 11:10:00,008 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains</div>
<div>2012-09-21 11:10:00,009 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Autorecovering 0 storage domains</div><div>2012-09-21 11:10:00,010 INFO [org.ovirt.engine.core.bll.AutoRecoveryManager] (QuartzScheduler_Worker-49) Checking autorecoverable storage domains done</div>
<div>2012-09-21 11:10:22,710 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-84) Failed to decryptData must not be longer than 256 bytes</div><div>2012-09-21 11:10:22,726 ERROR [org.ovirt.engine.core.engineencryptutils.EncryptionUtils] (QuartzScheduler_Worker-12) Failed to decryptData must start with zero</div>
<div>2012-09-21 11:10:54,519 INFO [org.ovirt.engine.core.bll.storage.RemoveStorageServerConnectionCommand] (ajp--0.0.0.0-8009-11) [3769be9c] Running command: RemoveStorageServerConnectionCommand internal: false. Entities affected : ID: aaa00000-0000-0000-0000-123456789aaa Type: System</div>
<div>2012-09-21 11:10:54,537 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] START, DisconnectStorageServerVDSCommand(vdsId = 3822e6c0-0295-11e2-86e6-d74ad5358c03, storagePoolId = 00000000-0000-0000-0000-000000000000, storageType = NFS, connectionList = [{ id: null, connection: gfs1.bostonvineyard.org:/data };]), log id: 16dd4a1b</div>
<div>2012-09-21 11:10:56,417 INFO [org.ovirt.engine.core.vdsbroker.vdsbroker.DisconnectStorageServerVDSCommand] (ajp--0.0.0.0-8009-11) [3769be9c] FINISH, DisconnectStorageServerVDSCommand, return: {00000000-0000-0000-0000-000000000000=477}, log id: 16dd4a1b</div>
<div><br></div><div>Thanks,</div><div><br></div><div>dk</div><br><div class="gmail_quote">On Fri, Sep 21, 2012 at 11:09 AM, Dominic Kaiser <span dir="ltr"><<a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">I can mount to another computer with this command:<div><br></div><div><div>mount -o mountproto=tcp,vers=3 -t nfs gfs1.bostonvineyard.org:/data /home/administrator/test</div>
<div><br></div><div>So the volume works, but I get a 500 error (a timeout) when trying to add it as a storage domain in oVirt. Weird, right?</div><span class="HOEnZb"><font color="#888888">
<div><br></div><div>dk</div></font></span><div><div class="h5"><br><div class="gmail_quote">On Fri, Sep 21, 2012 at 10:44 AM, Dominic Kaiser <span dir="ltr"><<a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hey All,<div><br></div><div>So I finally found the problem: cheap NICs. With Intel NICs installed, I have no problems creating gluster volumes, including distributed replicated ones. Broadcom and Realtek, yuck! Now I am trying to mount the gluster volume as an NFS mount and am having a problem: it times out as if it were blocked by a firewall.</div>
<div><br></div><div>I am trying to: mount -t nfs gfs1.bostonvineyard.org:/export /home/administrator/test</div><div><br></div><div>Here is gfs1 tail vdsm.log</div><div><br></div><div><div>[root@gfs1 vdsm]# tail vdsm.log</div>
<div>Thread-88731::DEBUG::2012-09-21 10:35:56,566::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}</div><div>Thread-88731::DEBUG::2012-09-21 10:35:56,567::task::978::TaskManager.Task::(_decref) Task=`01b69eed-de59-4e87-8b28-5268b5dcbb50`::ref 0 aborting False</div>
<div>Thread-88737::DEBUG::2012-09-21 10:36:06,890::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state init -> state preparing</div><div>Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::37::dispatcher::(wrapper) Run and protect: repoStats(options=None)</div>
<div>Thread-88737::INFO::2012-09-21 10:36:06,891::logUtils::39::dispatcher::(wrapper) Run and protect: repoStats, Return response: {}</div><div>Thread-88737::DEBUG::2012-09-21 10:36:06,891::task::1172::TaskManager.Task::(prepare) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::finished: {}</div>
<div>Thread-88737::DEBUG::2012-09-21 10:36:06,892::task::588::TaskManager.Task::(_updateState) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::moving from state preparing -> state finished</div><div>Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::809::ResourceManager.Owner::(releaseAll) Owner.releaseAll requests {} resources {}</div>
<div>Thread-88737::DEBUG::2012-09-21 10:36:06,892::resourceManager::844::ResourceManager.Owner::(cancelAll) Owner.cancelAll requests {}</div><div>Thread-88737::DEBUG::2012-09-21 10:36:06,893::task::978::TaskManager.Task::(_decref) Task=`f70222ad-f8b4-4733-9526-eff1d214ebd8`::ref 0 aborting False</div>
</div><div><br></div><div>Do you know why I cannot connect via NFS? I am using an older kernel (not 3.5), and iptables is off.</div><span><font color="#888888"><div><br></div><div>Dominic</div></font></span><div>
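<div><br></div><div>Since iptables is reportedly off, it may still be worth confirming the NFS plumbing from the client side. A quick diagnostic sketch (the hostname is the one from this thread; these are standard NFSv3 checks, not anything oVirt-specific):</div><div><br></div>

```shell
# Confirm the gluster NFS server is actually exporting the volume
showmount -e gfs1.bostonvineyard.org

# Gluster's built-in NFS server speaks NFSv3 over TCP, so portmapper,
# mountd and nfs must all be registered on the server
rpcinfo -p gfs1.bostonvineyard.org

# Double-check the firewall really is off on the server
iptables -L -n
```

<div>If showmount times out while rpcinfo answers, that usually points at mountd specifically rather than the network as a whole.</div>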
<div><div><br></div><div><br><div class="gmail_quote">On Mon, Sep 10, 2012 at 12:20 PM, Haim Ateya <span dir="ltr"><<a href="mailto:hateya@redhat.com" target="_blank">hateya@redhat.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>On 09/10/2012 06:27 PM, Dominic Kaiser wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Here are the message and the logs again, zipped this time; the first delivery failed:<br>
<br>
OK, here are the logs: 4 node logs and 1 engine log. I tried making the /data folder owned by root, and then by 36:36; neither worked. The volume is named data, matching the folder path on the nodes.<br>
<br>
Let me know what you think,<br>
<br>
Dominic<br>
</blockquote>
<br></div>
This is the actual failure (taken from gfs2's vdsm.log).<br>
<br>
Thread-332442::DEBUG::2012-09-10 10:28:05,788::BindingXMLRPC::859::vds::(wrapper) client [10.3.0.241]::call volumeCreate with ('data', ['10.4.0.97:/data', '10.4.0.98:/data', '10.4.0.99:/data', '10.4.0.100:/data'],<br>
2, 0, ['TCP']) {} flowID [406f2c8e]<br>
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,792::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/sbin/gluster --mode=script volume create data replica 2 transport TCP 10.4.0.97:/data 10.4.0.98:/data 10<br>
.4.0.99:/data 10.4.0.100:/data' (cwd None)<br>
MainProcess|Thread-332442::DEBUG::2012-09-10 10:28:05,900::__init__::1249::Storage.Misc.excCmd::(_log) FAILED: <err> = 'Host 10.4.0.99 not a friend\n'; <rc> = 255<br>
MainProcess|Thread-332442::ERROR::2012-09-10 10:28:05,900::supervdsmServer::76::SuperVdsm.ServerCallback::(wrapper) Error in wrapper<br>
Traceback (most recent call last):<br>
File "/usr/share/vdsm/supervdsmServer.py", line 74, in wrapper<br>
return func(*args, **kwargs)<br>
File "/usr/share/vdsm/supervdsmServer.py", line 286, in wrapper<br>
return func(*args, **kwargs)<br>
File "/usr/share/vdsm/gluster/cli.py", line 46, in wrapper<br>
return func(*args, **kwargs)<br>
File "/usr/share/vdsm/gluster/cli.py", line 176, in volumeCreate<br>
raise ge.GlusterVolumeCreateFailedException(rc, out, err)<br>
GlusterVolumeCreateFailedException: Volume create failed<br>
error: Host 10.4.0.99 not a friend<br>
return code: 255<br>
Thread-332442::ERROR::2012-09-10 10:28:05,901::BindingXMLRPC::877::vds::(wrapper) unexpected error<br>
Traceback (most recent call last):<br>
File "/usr/share/vdsm/BindingXMLRPC.py", line 864, in wrapper<br>
res = f(*args, **kwargs)<br>
File "/usr/share/vdsm/gluster/api.py", line 32, in wrapper<br>
rv = func(*args, **kwargs)<br>
File "/usr/share/vdsm/gluster/api.py", line 87, in volumeCreate<br>
transportList)<br>
File "/usr/share/vdsm/supervdsm.py", line 67, in __call__<br>
return callMethod()<br>
File "/usr/share/vdsm/supervdsm.py", line 65, in <lambda><br>
**kwargs)<br>
File "<string>", line 2, in glusterVolumeCreate<br>
File "/usr/lib64/python2.7/multiprocessing/managers.py", line 759, in _callmethod<br>
kind, result = conn.recv()<br>
TypeError: ('__init__() takes exactly 4 arguments (1 given)', <class 'gluster.exception.GlusterVolumeCreateFailedException'>, ())<br>
<br>
Can you please run gluster peer status on all your nodes? Also, '10.4.0.99' appears to be problematic; can you try creating the volume without it?<br>
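<br>
For what it's worth, "not a friend" is gluster's way of saying the peer handshake with that host never completed. A sketch of the checks implied above (the IPs and volume layout are taken from the vdsm log in this thread):<br>

```shell
# On each node: every other node should show
# "State: Peer in Cluster (Connected)"
gluster peer status

# If 10.4.0.99 is missing or rejected, probe it again from a healthy node
gluster peer probe 10.4.0.99

# Then retry the create that vdsm attempted on your behalf
gluster --mode=script volume create data replica 2 transport TCP \
    10.4.0.97:/data 10.4.0.98:/data 10.4.0.99:/data 10.4.0.100:/data
```

<br>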
<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
<br>
On Mon, Sep 10, 2012 at 11:24 AM, Dominic Kaiser <<a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a>> wrote:<br>
<br>
Here are the other two logs forgot them.<br>
<br>
dk<br>
<br>
<br>
On Mon, Sep 10, 2012 at 11:19 AM, Dominic Kaiser<br></div>
<<a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a>><div>
<br>
wrote:<br>
<br>
Ok here are the logs 4 node and 1 engine log. Tried making<br>
/data folder owned by root and then tried by 36:36 neither<br>
worked. Name of volume is data to match folders on nodes also.<br>
<br>
Let me know what you think,<br>
<br>
Dominic<br>
<br>
<br>
On Thu, Sep 6, 2012 at 8:33 AM, Maxim Burgerhout<br></div><div>
<<a href="mailto:maxim@wzzrd.com" target="_blank">maxim@wzzrd.com</a>> wrote:<br>
<br>
I just ran into this as well, and it seems that you have<br>
to either reformat previously used gluster bricks or<br>
manually tweak some extended attributes.<br>
<br>
Maybe this helps you in setting up your gluster volume,<br>
Dominic?<br>
<br>
More info here:<br>
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=812214" target="_blank">https://bugzilla.redhat.com/show_bug.cgi?id=812214</a><br>
<br>
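For anyone hitting the same thing: the manual tweak that bug describes boils down to clearing the stale volume metadata on each reused brick. A sketch, assuming the /data brick path used earlier in this thread (destructive; only run it on bricks you intend to wipe and reuse):<br>

```shell
# Remove the stale gluster metadata that marks /data as
# "already part of a volume"
setfattr -x trusted.glusterfs.volume-id /data
setfattr -x trusted.gfid /data
rm -rf /data/.glusterfs
```

<br>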
<br>
Maxim Burgerhout<br></div>
<a href="mailto:maxim@wzzrd.com" target="_blank">maxim@wzzrd.com</a><div><br>
----------------<br>
EB11 5E56 E648 9D99 E8EF 05FB C513 6FD4 1302 B48A<br>
<br>
<br>
<br>
<br>
<br>
On Thu, Sep 6, 2012 at 7:50 AM, Shireesh Anjal<br></div><div>
<<a href="mailto:sanjal@redhat.com" target="_blank">sanjal@redhat.com</a>> wrote:<br>
<br>
Hi Dominic,<br>
<br>
Looking at the engine log immediately after trying to<br>
create the volume should tell us on which node the<br>
gluster volume creation was attempted. Then looking at<br>
the vdsm log on that node should help us identifying<br>
the exact reason for failure.<br>
<br>
In case this doesn't help you, can you please simulate<br>
the issue again and send back all the 5 log files?<br>
(engine.log from engine server and vdsm.log from the 4<br>
nodes)<br>
<br>
Regards,<br>
Shireesh<br>
<br>
<br>
On Wednesday 05 September 2012 11:50 PM, Dominic<br>
Kaiser wrote:<br>
</div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>
So I have a problem creating glusterfs volumes. Here<br>
is the install:<br>
<br></div>
1. Ovirt 3.1<br>
2. 4 Nodes are Fedora 17 with kernel 3.3.4 -<br>
5.fc17.x86_64<br>
3. 4 nodes peer joined and running<br>
4. 4 nodes added as hosts to ovirt<br>
5. created a directory on each node this path /data<br>
6. chown -R 36:36 /data on all nodes<br>
7. went back to ovirt and created a<div><br>
distributed/replicated volume and added the 4<br>
nodes with brick path of /data<br>
<br>
I received this error:<br>
<br>
Creation of Gluster Volume maingfs1 failed.<br>
<br>
I went and looked at the vdsm logs on the nodes and<br>
the ovirt server which did not say much. Where else<br>
should I look? Also this error is vague what does it<br>
mean?<br>
<br>
<br>
-- Dominic Kaiser<br>
Greater Boston Vineyard<br>
Director of Operations<br>
<br></div>
cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br>
email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><br>
<br>
<br>
<br>
<br>
_______________________________________________<br>
Users mailing list<br>
<a href="mailto:Users@ovirt.org" target="_blank">Users@ovirt.org</a><br>
<a href="http://lists.ovirt.org/mailman/listinfo/users" target="_blank">http://lists.ovirt.org/mailman/listinfo/users</a><br>
</blockquote>
<br>
<br>
<div><br>
<br>
<br>
<br>
<br>
<br>
-- Dominic Kaiser<br>
Greater Boston Vineyard<br>
Director of Operations<br>
<br></div>
cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br>
email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><div><br>
<br>
<br>
<br>
<br>
<br>
-- Dominic Kaiser<br>
Greater Boston Vineyard<br>
Director of Operations<br>
<br></div>
cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br>
email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><div>
<br>
<br>
<br>
<br>
<br>
<br>
-- <br>
Dominic Kaiser<br>
Greater Boston Vineyard<br>
Director of Operations<br>
<br>
cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br></div>
email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><div>
<br>
<br>
<br>
<br>
<br>
</div></blockquote>
<br>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Dominic Kaiser<br>Greater Boston Vineyard<br>Director of Operations<br><br>cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br>email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><br>
<br><br>
</div>
</div></div></blockquote></div><br><br clear="all"><div><br></div>-- <br>Dominic Kaiser<br>Greater Boston Vineyard<br>Director of Operations<br><br>cell: <a href="tel:617-230-1412" value="+16172301412" target="_blank">617-230-1412</a><br>
fax: <a href="tel:617-252-0238" value="+16172520238" target="_blank">617-252-0238</a><br>email: <a href="mailto:dominic@bostonvineyard.org" target="_blank">dominic@bostonvineyard.org</a><br>
<br><br>
</div></div></div>
</blockquote></div><br><br clear="all"><div><br></div>-- <br>Dominic Kaiser<br>Greater Boston Vineyard<br>Director of Operations<br><br>cell: 617-230-1412<br>fax: 617-252-0238<br>email: <a href="mailto:dominic@bostonvineyard.org">dominic@bostonvineyard.org</a><br>
<br><br>
</div>