Again it got cut off ???
I've never really understood how to get a mailing list thread to
follow a reply correctly either...
From advice in IRC I changed
# nfs_mount_options = soft,nosharecache
to
nfs_mount_options = soft,nosharecache,vers=3
- then re-attached the node - it now shows as NFSv3 using the mount
command and has the correct UID/GID, i.e.
ls -al
/rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/7e2f025e-5676-4592-84fd-7c9661f5ee2e/
total 2097168
drwxr-xr-x. 2 vdsm kvm 4096 Mar 28 15:23 .
drwxr-xr-x. 3 vdsm kvm 4096 Mar 28 15:23 ..
-rw-rw----. 1 vdsm kvm 2147483648 Mar 28 15:23
e4aa8bf8-0cb8-4f1c-84cf-b83909f7206b
-rw-r--r--. 1 vdsm kvm 320 Mar 28 15:23
e4aa8bf8-0cb8-4f1c-84cf-b83909f7206b.meta
It still doesn't work...
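
For reference, a quick sanity check from the node (just a rough sketch - it assumes
sudo works there and reuses the image directory shown above, with a throwaway test
file name) would be:

sudo -u vdsm touch /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/7e2f025e-5676-4592-84fd-7c9661f5ee2e/write_test
sudo -u vdsm rm /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/7e2f025e-5676-4592-84fd-7c9661f5ee2e/write_test

If both commands succeed, the NFS permissions themselves are probably fine now and
the next failure logged in /var/log/vdsm/vdsm.log may point at something other than
ownership.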
Cheers
On 28 March 2012 16:52, Morgan Cox <morgancoxuk(a)gmail.com> wrote:
> Hi
>
> Just to update this.
>
> On the NFS server I created vdsm:kvm user/group
>
> Also, on my Debian NFS server I have this in /etc/default/nfs-kernel-server:
>
> RPCMOUNTDOPTS="--manage-gids --no-nfs-version 4"
>
> But the node always mounted in nfs4 (according to the mount command)
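>
> (It may be worth double-checking that the /etc/default/nfs-kernel-server change
> actually took effect on the server - a rough sketch, assuming the stock Debian
> init script:
>
> /etc/init.d/nfs-kernel-server restart
> rpcinfo -p 10.0.0.190 | grep nfs
>
> If version 4 is still listed there, the server is still offering it, and the
> client will keep picking it unless vers=3 is forced, as described next.)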
>
> From advice in IRC I changed
>
> # nfs_mount_options = soft,nosharecache
>
> to
>
> nfs_mount_options = soft,nosharecache,vers=3
>
> - then re-attached the node - it now shows as NFSv3 using the mount
> command and has the correct UID/GID, i.e.
>
> ls -al
> /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/7e2f025e-5676-4592-84fd-7c9661f5ee2e/
> total 2097168
> drwxr-xr-x. 2 vdsm kvm 4096 Mar 28 15:23 .
> drwxr-xr-x. 3 vdsm kvm 4096 Mar 28 15:23 ..
> -rw-rw----. 1 vdsm kvm 2147483648 Mar 28 15:23
> e4aa8bf8-0cb8-4f1c-84cf-b83909f7206b
> -rw-r--r--. 1 vdsm kvm 320 Mar 28 15:23
> e4aa8bf8-0cb8-4f1c-84cf-b83909f7206b.meta
>
> It still doesn't work...
>
> Cheers
>
>
>
>
>
> On 28 March 2012 16:26, Keith Robertson <kroberts(a)redhat.com> wrote:
>> Morgan,
>>
>> I did some googling and it looks like that ID, i.e. 4294967294, is the
>> nfsnobody ID. There are various posts on the Debian forums related to it.
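>>
>> (That all-ones UID is usually what shows up when NFSv4 ID mapping falls back
>> to "nobody". A rough thing to check, assuming the stock idmapd setup, is that
>> /etc/idmapd.conf carries the same Domain on both the server and the node, e.g.
>>
>> [General]
>> Domain = example.com
>>
>> with example.com standing in for whatever domain you actually use. Forcing
>> NFSv3 on the client with vers=3, as Morgan ends up doing above, sidesteps the
>> mapping entirely.)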
>>
>> One suspicion I have is that your Debian server is running NFSv4. Try
>> turning it off [1] as oVirt doesn't currently support it [2].
>>
>> [1]
>> "RPCMOUNTDOPTS=--no-nfs-version 4":
>> http://lists.debian.org/debian-user/2011/11/msg01892.html
>> [2]
>> http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
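>>
>> A quick way to confirm what the node actually negotiated is something like
>> (rough sketch, run on the node):
>>
>> nfsstat -m
>>
>> or simply mount | grep storage1 - if it still reports nfs4/vers=4 after the
>> server-side change, forcing vers=3 in the client mount options is the other
>> half of the fix.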
>>
>> Cheers,
>> Keith
>>
>>
>> On 03/28/2012 09:51 AM, Keith Robertson wrote:
>>>
>>> On 03/28/2012 09:30 AM, Morgan Cox wrote:
>>>>
>>>> Hi.
>>>>
>>>> My setup is 3 servers:
>>>>
>>>> 1. Frontend (engine)
>>>> 2. oVirt node
>>>> 3. NFS server (Debian)
>>>>
>>>> From the Frontend (engine) :-
>>>>
>>>> -bash-4.2$ ls -la /tmp/test/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/d1fcc1ae-dcf5-426e-a99f-3a48e84c5ae3/
>>>> total 2097168
>>>> drwxr-xr-x. 2 vdsm kvm 4096 Mar 28 2012 .
>>>> drwxr-xr-x. 3 vdsm kvm 4096 Mar 28 2012 ..
>>>> -rw-rw----. 1 vdsm kvm 2147483648 Mar 28 2012
>>>> 9fe193c9-7139-4a6c-933a-c6f31d5e96bd
>>>> -rw-r--r--. 1 vdsm kvm 317 Mar 28 2012
>>>> 9fe193c9-7139-4a6c-933a-c6f31d5e96bd.meta
>>>>
>>>>
>>>> I (now) have write access with the vdsm user from here
>>>>
>>>> From the node (look at ownership....) :-
>>>>
>>>>
>>>> -bash-4.2$ ls -la /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/d1fcc1ae-dcf5-426e-a99f-3a48e84c5ae3/
>>>> total 2097168
>>>> drwxr-xr-x. 2 4294967294 4294967294 4096 Mar 28 13:21 .
>>>> drwxr-xr-x. 3 4294967294 4294967294 4096 Mar 28 13:05 ..
>>>> -rw-rw----. 1 4294967294 4294967294 2147483648 Mar 28 13:05
>>>> 9fe193c9-7139-4a6c-933a-c6f31d5e96bd
>>>> -rw-r--r--. 1 4294967294 4294967294 317 Mar 28 13:05
>>>> 9fe193c9-7139-4a6c-933a-c6f31d5e96bd.meta
>>>>
>>> Morgan,
>>> Look at the UID/GID here. They are definitely not 36:36; hence, the user
>>> vdsm *might* have an issue with R/W privileges from the node.
>>>
>>> Can the vdsm user R/W anything in the mounted directory from the node?
>>> Also, if you don't want to create a 36:36 UID/GID combination on the NFS
>>> server, try pinning it...
>>> $ cat /etc/exports
>>> /virt/iso 192.168.122.11(rw,sync,all_squash,anonuid=107,anongid=107) <-- I'm pinning
>>> to 107 because I don't want 36:36 on my NFS server.
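>>>
>>> (If you do want to match oVirt's 36:36 on the storage side instead, one
>>> rough sketch on the Debian server - assuming the vdsm/kvm names are free
>>> there - would be:
>>>
>>> groupadd -g 36 kvm
>>> useradd -u 36 -g 36 vdsm
>>> exportfs -ra
>>>
>>> or, equivalently, squash everything to 36:36 in /etc/exports with
>>> all_squash,anonuid=36,anongid=36 and re-export.)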
>>>
>>>
>>>
>>>
>>>> Also
>>>>
>>>> From the engine - mount command:-
>>>>
>>>> 10.0.0.190:/storage1/ on /tmp/test type nfs
>>>> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.0.0.190,mountvers=3,mountport=35065,mountproto=udp,local_lock=none,addr=10.0.0.190)
>>>>
>>>> - note that the NFS share wasn't mounted until I mounted it...
>>>>
>>>> From the node - mount command
>>>>
>>>> 10.0.0.190:/storage1/ on /rhev/data-center/mnt/10.0.0.190:_storage1 type nfs4
>>>> (rw,relatime,vers=4,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,port=0,timeo=600,retrans=6,sec=sys,clientaddr=10.0.0.101,minorversion=0,local_lock=none,addr=10.0.0.190)
>>>>
>>>> - looks like NFSv4 on the node...
>>>>
>>>> Any ideas anyone ?
>>>>
>>>> Cheers
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On 28 March 2012 06:45, Deepak C Shetty <deepakcs(a)linux.vnet.ibm.com> wrote:
>>>>>
>>>>> On 03/27/2012 04:08 PM, Morgan Cox wrote:
>>>>>
>>>>> Hi
>>>>>
>>>>>
>>>>> http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
>>>>>
>>>>> - This seems to refer to setup on a Red Hat-based system
>>>>>
>>>>> The files it refers to, i.e. /etc/sysconfig/nfs and /etc/nfsmount.conf,
>>>>> do not exist on Debian...
>>>>>
>>>>> Do you know the equivalent on Debian ?
>>>>>
>>>>>
>>>>> Sorry, I use Fedora, and I don't know the equivalent on Debian.
>>>>> Maybe asking the big brain (Google) might help. There should be an
>>>>> equivalent way of doing the same on Debian.
>>>>>
>>>>> Also, in my case v4 was not an issue as much as the perms and uid:gid
>>>>> were, so look further to check that the perms and SELinux booleans are
>>>>> set correctly in your setup.
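>>>>>
>>>>> (On the node, for instance - assuming the usual virt_use_nfs boolean is
>>>>> the one that applies on your build - something like:
>>>>>
>>>>> getsebool virt_use_nfs
>>>>> setsebool -P virt_use_nfs on
>>>>>
>>>>> shows whether SELinux allows qemu to touch NFS-backed images, and turns
>>>>> it on if it doesn't.)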
>>>>>
>>>>>
>>>>> Regards
>>>>>
>>>>>
>>>>> On 27 March 2012 07:07, Deepak C Shetty <deepakcs(a)linux.vnet.ibm.com> wrote:
>>>>>>
>>>>>> On 03/26/2012 05:28 PM, Morgan Cox wrote:
>>>>>>
>>>>>> Hi.
>>>>>>
>>>>>> Still haven't actually managed to test a VM... I can start a VM (and use
>>>>>> SPICE) without a virtual disk.
>>>>>>
>>>>>> However, as soon as I add a virtual disk, the VM no longer starts.
>>>>>>
>>>>>>
>>>>>> Go through this and see if it helps...
>>>>>>
>>>>>> http://www.ovirt.org/wiki/Troubleshooting_NFS_Storage_Issues
>>>>>>
>>>>>>
>>>>>> Example error message:
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> -------------------------------------------------------------------------------------------------------------------
>>>>>> VM ddf is down. Exit message internal error process exited while
>>>>>> connecting to monitor: qemu-kvm: -drive
>>>>>>
>>>>>>
>>>>>> file=/rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/c507a2bc-38c5-498e-88c0-7cce8169cf67/6c36c8e4-6618-42f3-9cc4-06d2bccdc9cf,if=none,id=drive-virtio-disk0,format=raw,serial=c507a2bc-38c5-498e-88c0-7cce8169cf67,cache=none,werror=stop,rerror=stop,aio=threads:
>>>>>> could not open disk image
>>>>>>
>>>>>>
>>>>>> /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/2f7ee7bc-09b3-42ba-af91-40d79293e360/images/c507a2bc-38c5-498e-88c0-7cce8169cf67/6c36c8e4-6618-42f3-9cc4-06d2bccdc9cf:
>>>>>> Permission denied.
>>>>>>
>>>>>>
>>>>>>
>>>>>> -------------------------------------------------------------------------------------------------------------------
>>>>>>
>>>>>> I have noticed that the directory
>>>>>> /rhev/data-center/b2b2e054-66b2-11e1-bda3-1728f784de9e/ is not
>>>>>> writable via the oVirt node server.
>>>>>>
>>>>>> I am using a separate NFS server for storage.
>>>>>>
>>>>>> Is this a bug ?
>>>>>>
>>>>>> Does anyone know how to fix this ?
>>>>>>
>>>>>> Shall I report it ?
>>>>>>
>>>>>> Many regards !