----- Original Message -----
> From: "Roy Golan" <rgolan(a)redhat.com>
> To: "Mikola Rose" <mrose(a)power-soft.com>, "Maor Lipchuk" <mlipchuk(a)redhat.com>
> Cc: users(a)ovirt.org
> Sent: Wednesday, January 28, 2015 12:02:50 PM
> Subject: Re: [ovirt-users] oVirt 3.5.1 - VM "hostedengine" Failing to start
>
> On 01/28/2015 11:04 AM, Mikola Rose wrote:
>> -rwxr-xr-x 1 vdsm kvm 0 Jan 28 00:59 /Volumes/Raid1/__DIRECT_IO_TEST__
>> -rwxr-xr-x 1 vdsm kvm 0 Jan 28 00:55 /Volumes/Raid1/data/__DIRECT_IO_TEST__
>> -rwxrwxrwx 1 vdsm kvm 0 Jan 28 00:55 /Volumes/Raid1/iso/__DIRECT_IO_TEST__
>> -rwxr-xr-x 1 vdsm kvm 0 Jan 26 09:43 /Volumes/Raid1/vm/__DIRECT_IO_TEST__
>>
>> Note: since removing the file, hosted_engine1 starts up and
>> everything seems fine, but then all of a sudden it restarts.
>>
> Please attach some logs.
>> Also may be related...
>>
>> [root@powerhost1 ~]# engine-iso-uploader --ssh-user=36 upload -i iso rhel-server-6.6-x86_64-dvd.iso
>> Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to abort):
>> Uploading, please wait...
>> INFO: Start uploading rhel-server-6.6-x86_64-dvd.iso
>> ERROR: Unable to copy rhel-server-6.6-x86_64-dvd.iso to ISO storage domain on iso.
>> ERROR: Error message is "unable to test the available space on /Volumes/Raid1/iso"
>
> Maor?
Hi,
I think this is related to the engine-iso-uploader tool.
Sandro, do you have any insights about this?

Please don't use a numeric UID like 36 as the user.
A command line like:
ovirt-iso-uploader --ssh-user=root upload -i iso rhel-server-6.6-x86_64-dvd.iso
or like:
ovirt-iso-uploader --ssh-user=your_user_in_kvm_group upload -i iso rhel-server-6.6-x86_64-dvd.iso
should work.
Regards,
Maor
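Before re-running the uploader, it may help to reproduce its space/permission check by hand. A minimal sketch of such a probe (the `ISO_DIR` default is a placeholder; point it at /Volumes/Raid1/iso on the real host, ideally as the same user the uploader connects with):

```shell
# Probe an ISO domain directory: is it writable, and how much space is free?
ISO_DIR="${ISO_DIR:-$(mktemp -d)}"   # placeholder; use /Volumes/Raid1/iso for real

probe="$ISO_DIR/.space_probe.$$"
if touch "$probe" 2>/dev/null; then
    rm -f "$probe"
    # POSIX df -P: column 4 of the second output line is available space in KB
    avail_kb=$(df -P "$ISO_DIR" | awk 'NR==2 {print $4}')
    echo "writable, ${avail_kb} KB available"
else
    echo "not writable by $(id -un)" >&2
fi
```

If this fails when run over ssh as the chosen --ssh-user, the uploader will fail the same way.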
>>
>>> On Jan 28, 2015, at 12:53 AM, Roy Golan <rgolan(a)redhat.com> wrote:
>>>
>>> On 01/28/2015 03:10 AM, Mikola Rose wrote:
>>>> Hi there,
>>>>
>>>>
>>>> I seem to have run into a problem: my hosted engine VM is failing
>>>> to start.
>>>>
>>>> vdsm.log:
>>>>
>>>>
>>>> Thread-20::DEBUG::2015-01-27 16:53:37,999::fileSD::152::Storage.StorageDomain::(__init__) Reading domain in path /rhev/data-center/mnt/192.168.1.32:_Volumes_Raid1/443b4931-667f-441f-98d8-51384e67a0af
>>>> Thread-20::ERROR::2015-01-27 16:53:38,000::domainMonitor::256::Storage.DomainMonitorThread::(_monitorDomain) Error while collecting domain 443b4931-667f-441f-98d8-51384e67a0af monitoring information
>>>> Traceback (most recent call last):
>>>>   File "/usr/share/vdsm/storage/domainMonitor.py", line 221, in _monitorDomain
>>>>     self.domain = sdCache.produce(self.sdUUID)
>>>>   File "/usr/share/vdsm/storage/sdc.py", line 98, in produce
>>>>     domain.getRealDomain()
>>>>   File "/usr/share/vdsm/storage/sdc.py", line 52, in getRealDomain
>>>>     return self._cache._realProduce(self._sdUUID)
>>>>   File "/usr/share/vdsm/storage/sdc.py", line 122, in _realProduce
>>>>     domain = self._findDomain(sdUUID)
>>>>   File "/usr/share/vdsm/storage/sdc.py", line 141, in _findDomain
>>>>     dom = findMethod(sdUUID)
>>>>   File "/usr/share/vdsm/storage/nfsSD.py", line 122, in findDomain
>>>>     return NfsStorageDomain(NfsStorageDomain.findDomainPath(sdUUID))
>>>>   File "/usr/share/vdsm/storage/fileSD.py", line 159, in __init__
>>>>     validateFileSystemFeatures(sdUUID, self.mountpoint)
>>>>   File "/usr/share/vdsm/storage/fileSD.py", line 88, in validateFileSystemFeatures
>>>>     oop.getProcessPool(sdUUID).directTouch(testFilePath)
>>>>   File "/usr/share/vdsm/storage/outOfProcess.py", line 320, in directTouch
>>>>     ioproc.touch(path, flags, mode)
>>>>   File "/usr/lib/python2.6/site-packages/ioprocess/__init__.py", line 507, in touch
>>>>     self.timeout)
>>>>   File "/usr/lib/python2.6/site-packages/ioprocess/__init__.py", line 391, in _sendCommand
>>>>     raise OSError(errcode, errstr)
>>>> OSError: [Errno 13] Permission denied
>>>>
>>>> I assume this is an nfs issue so I checked to see if I could create
>>>> a file in the mounts from the host machine, which I could.
>>>>
>>>> My test bed is using an old OS X NFS server via XRAID, and the export is:
>>>> /Volumes/Raid1 -maproot=root:wheel -network 192.168.1.0 -mask 255.255.255.0
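For comparison, a BSD-style exports(5) entry that maps every client credential to the vdsm uid/gid (36:36, the ids oVirt uses) would look roughly like this. This is an untested sketch: `-mapall` and `-maproot` are mutually exclusive, so it replaces the `-maproot=root:wheel` above; verify the exact syntax against the OS X exports(5) man page.

```
/Volumes/Raid1 -mapall=36:36 -network 192.168.1.0 -mask 255.255.255.0
```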
>>>>
>>>> drwxr-xr-x 6 vdsm kvm 204 Jan 27 16:30
>>>> 443b4931-667f-441f-98d8-51384e67a0af
>>>> drwxr-xr-x 4 vdsm kvm 136 Jan 27 12:32 data
>>>> drwxr-xr-x 4 vdsm kvm 136 Jan 27 00:18 iso
>>>> drwxr-xr-x 3 vdsm kvm 102 Jan 26 09:43 vm
>>>>
>>>> the host machine seems to mount the shares fine....
>>>>
>>>> drwxr-xr-x 5 vdsm kvm 4096 Jan 27 12:28 .
>>>> drwxr-xr-x 3 vdsm kvm 4096 Jan 27 10:04 ..
>>>> drwxr-xr-x 13 vdsm kvm 544 Jan 27 00:12 192.168.1.32:_Volumes_Raid1
>>>> drwxr-xr-x 2 vdsm kvm 4096 Jan 27 00:12 192.168.1.32:_Volumes_Raid1_data
>>>> drwxr-xr-x 2 vdsm kvm 4096 Jan 27 00:24 192.168.1.32:_Volumes_Raid1_iso
>>>>
>>>> and as I said above I can create files in any one of those mounts
>>>>
>>>>
>>>> Is there a place I can look to find the offending file, if that
>>>> is the issue? Oddly enough, everything worked until I rebooted, so
>>>> I must have either changed something or something is buggered.
>>>>
>>>
>>> Please echo the output of:
>>>
>>> find /Volumes/Raid1 -name "__DIRECT_IO_TEST__" | xargs ls -la
>>>
>>> The failure is in creating this file.
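For context, vdsm validates a file storage domain by touching __DIRECT_IO_TEST__ with O_DIRECT. A similar probe can be reproduced with dd's oflag=direct — a rough approximation of vdsm's check, not its exact code; the `TESTDIR` default is a placeholder, so point it at the mounted domain path and run it as the vdsm user (e.g. via sudo -u vdsm):

```shell
# Rough stand-in for vdsm's direct-I/O check: write one 512-byte block with O_DIRECT.
TESTDIR="${TESTDIR:-$(mktemp -d)}"   # placeholder; use the NFS mount point for real
TESTFILE="$TESTDIR/__DIRECT_IO_TEST__"

if dd if=/dev/zero of="$TESTFILE" bs=512 count=1 oflag=direct 2>/dev/null; then
    status="direct I/O OK on $TESTDIR"
else
    # EACCES here matches the vdsm traceback; tmpfs can also return EINVAL
    status="direct I/O failed on $TESTDIR"
fi
echo "$status"
rm -f "$TESTFILE"
```

A "failed" result on the real mount, while a plain touch succeeds, would point at the NFS server rejecting direct I/O or squashing the vdsm credentials.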
>>>
>>>>
>>>> _______________________________________________
>>>> Users mailing list
>>>> Users(a)ovirt.org
>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>
>>
>> Mik Rose | Manager, IT - Support Services | *PowerSoft Development Corp*
>> 1 (250) 642-0295 x23
>> http://www.power-soft.com
>> Live Support <https://secure.logmeinrescue.com/Customer/Download.aspx?EntryID=15095831>
>> This e-mail may be privileged and/or confidential, and the sender does not waive
>> any related rights and obligations. Any distribution, use or copying of this e-mail
>> or the information it contains by other than an intended recipient is unauthorized.
>> If you received this e-mail in error, please advise me (by return e-mail or otherwise) immediately.
>>
--
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at