[Users] Export Domain & Upgrade
by Nicholas Kesick
This is the first time I've run through an 'upgrade', so I'm very new to
export domains, and I'm having some trouble getting one connected to 3.3.
I had hoped to add this to the wiki, but it hasn't been as straightforward
as I thought.
On my oVirt 3.2 install, I created an NFS export (/var/lib/exports/DAONE)
and exported my VMs to it. I created a tarball of everything in DAONE, then
formatted the system and installed Fedora 19, followed by oVirt 3.3. I
created the NFS resource (/var/lib/exports/DAONE), extracted the tarball,
and followed these directions to clear the storage domain:
http://www.ovirt.org/How_to_clear_the_storage_domain_pool_config_of_an_exported_nfs_domain
However, when I try to add it, the webadmin reports the following error:
"Error while executing action New NFS Storage Domain: Error in creating a
Storage Domain. The selected storage path is not empty (probably
contains another Storage Domain). Either remove the existing Storage
Domain from this path, or change the Storage path)."
Any suggestions? I wonder if anything changed for 3.3 that needs to be added
to the instructions?
I also tried making another export domain (/var/lib/exports/export) and
dumping everything into the UUID directory under it, but no VMs showed up
to import.
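For completeness, this is roughly how I'm sanity-checking the restored tree
(the image/VM UUIDs below are placeholders):

find /var/lib/exports/DAONE -maxdepth 2 -type d
# expected, roughly:
#   /var/lib/exports/DAONE/8e4f6fbd-b635-4f47-b113-ba146ee1c0cf
#   /var/lib/exports/DAONE/8e4f6fbd-b635-4f47-b113-ba146ee1c0cf/dom_md
#   /var/lib/exports/DAONE/8e4f6fbd-b635-4f47-b113-ba146ee1c0cf/images
#   /var/lib/exports/DAONE/8e4f6fbd-b635-4f47-b113-ba146ee1c0cf/master
# the exported disks live under images/<image-uuid>/<volume-uuid>, and the
# OVF descriptors the import dialog reads should be under
# master/vms/<vm-uuid>/<vm-uuid>.ovf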
#showmount -e f19-ovirt.mkesick.net
Export list for f19-ovirt.mkesick.net:
/var/lib/exports/storage 0.0.0.0/0.0.0.0
/var/lib/exports/iso 0.0.0.0/0.0.0.0
/var/lib/exports/export 0.0.0.0/0.0.0.0
/var/lib/exports/DAONE 0.0.0.0/0.0.0.0
[root@f19-ovirt dom_md]# cat metadata
CLASS=Backup
DESCRIPTION=DaOne
IOOPTIMEOUTSEC=1
LEASERETRIES=3
LEASETIMESEC=5
LOCKPOLICY=
LOCKRENEWALINTERVALSEC=5
MASTER_VERSION=0
POOL_UUID=
REMOTE_PATH=localhost:/var/lib/exports/DAONE
ROLE=Regular
SDUUID=8e4f6fbd-b635-4f47-b113-ba146ee1c0cf
TYPE=NFS
VERSION=0
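For reference, the "clearing" from those directions boiled down to editing
that metadata file before trying to attach the domain; roughly (treat the
sed lines as a sketch of the manual edit):

cd /var/lib/exports/DAONE/8e4f6fbd-b635-4f47-b113-ba146ee1c0cf/dom_md
cp metadata metadata.bak                        # keep a copy first
sed -i 's/^POOL_UUID=.*/POOL_UUID=/' metadata   # blank the old pool UUID
sed -i '/^_SHA_CKSUM=/d' metadata               # drop the stale checksum line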
Re: [Users] could not add iscsi disk in ovirt.
by Dafna Ron
You don't have to reconfigure the engine to create an iscsi pool.
You can create multiple pools; the only limitation is that you will need
a host for each of the DCs.
But you can move one host from one DC to another...
So if you have 2 hosts, just create a new DC and move one of the hosts
there.
I think that direct LUN is only for iscsi and fibre channel, but I can give
you a 100%-certain answer on Monday.
I don't think it's supported for NFS... I can check for you on Monday for
sure.
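In the meantime you can check directly on the host whether vdsm sees the
lun at all once an iscsi session is open - something along these lines
(portal/iqn taken from your engine.log below; log out again afterwards so
the manual session doesn't clash with what the engine sets up):

iscsiadm -m discovery -t sendtargets -p 192.168.100.160
iscsiadm -m node -T iqn.2013.26.ovirt.techblue.lv001 -p 192.168.100.160 --login
vdsClient -s 0 getDeviceList   # should now list the lun and whether it is free
iscsiadm -m node -T iqn.2013.26.ovirt.techblue.lv001 -p 192.168.100.160 --logout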
On 09/27/2013 03:28 PM, Saurabh wrote:
> On 27/09/13 19:52, Dafna Ron wrote:
>> wait :)
>>
>> the first screen shot was of local storage and internal disk type -
>> not related at all to the direct lun feature.
>>
>> the second screen shot is for iscsi storage pool.
>> it shows a current connected lun that you have.
>>
>> try this:
>>
>> create a new target lun.
>> log in to the webadmin -> disks tab -> new disk
>> select the Default DC (which is your iscsi storage type) -> expand
>> the "discover targets"
>> put in your storage server (where you created the lun) -> discover
>> you should then see a list of targets -> log in
>> select the target and press OK.
>>
>>
>> On 09/27/2013 03:11 PM, Saurabh wrote:
>>> On 27/09/13 19:14, Dafna Ron wrote:
>>>> the output is empty in both your output and the logs.
>>>>
>>>> This is the part in the log that shows that you do not have any
>>>> devices: return: []
>>>>
>>>> when you create a new disk and select the NFS DC- can you please
>>>> send me the screen shot of that?
>>>>
>>>>
>>>> On 09/27/2013 02:41 PM, Saurabh wrote:
>>>>> On 27/09/13 18:40, Dafna Ron wrote:
>>>>>> I am not sure direct lun will be supported for an NFS pool... can
>>>>>> you try the same from iscsi pool?
>>>>>>
>>>>>> is this lun being used somewhere else or did you create a new one?
>>>>>> you can run vdsClient -s 0 getDeviceList to see the list of the
>>>>>> available luns and if the status is free or not.
>>>>>> (it should also be in the logs (engine and vdsm) when you discover
>>>>>> the luns from the webadmin.)
>>>>>>
>>>>>>
>>>>>> On 09/27/2013 01:43 PM, Saurabh wrote:
>>>>>>>
>>>>>>> Hi guys, I am running ovirt with NFS as storage pool. Everything
>>>>>>> is working quite fine for me. The only problem is I could not
>>>>>>> add a new iscsi disk in the Disk tab. When I go to the Disk tab
>>>>>>> there is an option of adding new disk from internal storage and
>>>>>>> another option is add External (Direct Lun). When I opt for the
>>>>>>> External (Direct LUN), I am able to discover the Lun but could
>>>>>>> not log in to that lun using the ovirt web console. Whereas when
>>>>>>> I try to log in to that lun using iscsiadm on the command line, I
>>>>>>> am able to log in. Any help??
>>>>>>>
>>>>>>> _______________________________________________
>>>>>>> Users mailing list
>>>>>>> Users(a)ovirt.org
>>>>>>> http://lists.ovirt.org/mailman/listinfo/users
>>>>>>
>>>>>>
>>>>> I have created a new Lun for this purpose only and it is not being
>>>>> used anywhere.
>>>>> In order to try an iscsi pool I will have to reconfigure the
>>>>> engine, because during the engine setup I opted for nfs.
>>>>>
>>>>> This is the output when I run this command.
>>>>>
>>>>> [root@ovirt ~]# vdsClient -s 0 getDeviceList
>>>>> []
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> this is a log snippet from engine.log when I discover and try to
>>>>> log in to that lun.
>>>>>
>>>>>
>>>>> #################################################################################################
>>>>>
>>>>> 2013-09-27 14:34:55,592 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand]
>>>>> (ajp--127.0.0.1-8702-11) START,
>>>>> DiscoverSendTargetsVDSCommand(HostName = 192.168.100.59, HostId =
>>>>> 168842b9-071b-4c64-a22d-07ae1e63830b, connection={ id: null,
>>>>> connection: 192.168.100.160, iqn: null, vfsType: null,
>>>>> mountOptions: null, nfsVersion: null, nfsRetrans: null, nfsTimeo:
>>>>> null };), log id: 2e12b97
>>>>> 2013-09-27 14:34:55,653 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand]
>>>>> (ajp--127.0.0.1-8702-11) FINISH, DiscoverSendTargetsVDSCommand,
>>>>> return: [{ id: null, connection: 192.168.100.160, iqn:
>>>>> iqn.2013.26.ovirt.techblue.lv001, vfsType: null, mountOptions:
>>>>> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], log
>>>>> id: 2e12b97
>>>>> 2013-09-27 14:36:25,877 INFO
>>>>> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
>>>>> (ajp--127.0.0.1-8702-2) Running command:
>>>>> ConnectStorageToVdsCommand internal: false. Entities affected :
>>>>> ID: aaa00000-0000-0000-0000-123456789aaa Type: System
>>>>> 2013-09-27 14:36:25,880 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>>> (ajp--127.0.0.1-8702-2) START,
>>>>> ConnectStorageServerVDSCommand(HostName = 192.168.100.59, HostId =
>>>>> 168842b9-071b-4c64-a22d-07ae1e63830b, storagePoolId =
>>>>> 00000000-0000-0000-0000-000000000000, storageType = ISCSI,
>>>>> connectionList = [{ id: null, connection: 192.168.100.160, iqn:
>>>>> iqn.2013.26.ovirt.techblue.lv001, vfsType: null, mountOptions:
>>>>> null, nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log
>>>>> id: 5fca13a4
>>>>> 2013-09-27 14:36:26,012 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
>>>>> (ajp--127.0.0.1-8702-2) FINISH, ConnectStorageServerVDSCommand,
>>>>> return: {00000000-0000-0000-0000-000000000000=0}, log id: 5fca13a4
>>>>> 2013-09-27 14:36:26,098 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-12)
>>>>> START, GetDeviceListVDSCommand(HostName = 192.168.100.59, HostId =
>>>>> 168842b9-071b-4c64-a22d-07ae1e63830b, storageType=ISCSI), log id:
>>>>> 20fb893b
>>>>> 2013-09-27 14:36:29,376 INFO
>>>>> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand] (ajp--127.0.0.1-8702-12)
>>>>> FINISH, GetDeviceListVDSCommand, return: [], log id: 20fb893b
>>>>>
>>>>> ######################################################################################################
>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>> Here I am sending you the two screenshots: one is when I am adding
>>> the disk from the Local-storage that is NFS, and another is the
>>> Direct lun.
>>> You can see that I am able to discover it but could not log into it.
>>>
>>
>>
> OK, so is there no way to use an iscsi disk in a Data center
> whose default storage is NFS?
--
Dafna Ron
[Users] oVirt 3.3 and Neutron
by Riccardo Brunetti
Dear oVirt users.
I'm trying to set up oVirt 3.3 using an existing OpenStack Neutron
service as the network provider.
It works pretty well: I can use the networks defined in Neutron, import
them and when I launch VM instances from oVirt they get an internal-IP
address from Neutron.
I can also associate a floating IP with the Neutron port and get inbound
connectivity for the virtual machine.
The problem is that if I shut down the VM (whether from inside the VM itself
or from the oVirt web GUI), the port/internal-IP/floating-IP association
is lost, and when the VM is booted again it gets a different
internal-IP on a different port, so I have to manually re-associate the
floating-IP.
Is there a way to keep the IP addresses when the VM is simply shut
down and not deleted? This is the behavior when using OpenStack: if I
power off the VM, the IPs are kept for the future.
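The manual re-association itself is simple enough with the neutron CLI
(a rough sketch; the IDs and the internal IP are placeholders):

neutron port-list | grep <new-internal-ip>               # the port the VM got on this boot
neutron floatingip-list                                  # the floating IP that is now dangling
neutron floatingip-associate <floatingip-id> <port-id>   # point it at the new port

but I'd rather not have to script that around every shutdown and boot.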
Moreover: can you confirm that in oVirt 3.3 there is still no support
for the Neutron security rules?
Thank you very much
Best Regards
Riccardo
[Users] Bottleneck writing to a VM w/ mounted GlusterFS
by Stefano Stagnaro
Hello,
I'm testing oVirt 3.3 with the GlusterFS libgfapi back-end. I'm using one node for the engine and one for VDSM. From the VMs I'm mounting a second GlusterFS volume from a third storage server.
I'm experiencing very bad transfer rates (38 MB/s) writing from a client to a VM on the mounted GlusterFS. On the other hand, from the VM itself I can move a big file from the root vda (libgfapi) to the mounted GlusterFS at 70 MB/s.
I can't really figure out where the bottleneck could be. I'm using only the default ovirtmgmt network.
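Unless someone has a better idea, my next step is to try to isolate the
network from the Gluster layer, roughly like this (hosts and paths are just
examples):

# raw network throughput from the client into the VM
iperf -s                        # inside the VM
iperf -c <vm-address> -t 30     # on the client
# write speed from inside the VM straight onto the mounted volume
dd if=/dev/zero of=/mnt/<gluster-mount>/ddtest bs=1M count=2048 conv=fdatasync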
Thank you for your help, any hint will be appreciated.
Regards,
--
Stefano Stagnaro
IT Manager
Prisma Engineering S.r.l.
Via Petrocchi, 4
20127 Milano – Italy
Tel. 02 26113507 int 339
e-mail: stefanos(a)prisma-eng.com
skype: stefano.stagnaro
[Users] Gluster NFS Replicate bricks different size
by Andrew Lau
I've mounted a gluster 1x2 replica through NFS in oVirt. The NFS share
holds the qcow images of the VMs.
I recently nuked a whole replica brick in a 1x2 array (for numerous other
reasons, including split-brain); the brick self-healed and restored back to
the same state as its partner.
4 days later, they've become unbalanced. A direct `du` of the bricks is
showing sizes that differ by around 20GB. I can see at the brick level that
some images are not the same size. I don't think this is normal, but I can't
see anything pointing to what the issue could be.
gluster volume heal STORAGE info
gluster volume heal STORAGE info split-brain
Both show no issues.
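For what it's worth, the size comparison amounts to something like this on
each replica (paths shortened to placeholders; note that `du` counts
allocated blocks, so sparse image files could in principle differ even when
their contents match):

du -sh /brick/<domain-uuid>/images/*                   # allocated size per image
du -sh --apparent-size /brick/<domain-uuid>/images/*   # logical size per image
md5sum /brick/<domain-uuid>/images/<img>/<vol>         # spot-check one image on both replicas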
Any suggestions?
Cheers,
Andrew.
Re: [Users] could not add iscsi disk in ovirt.
by Dafna Ron
the output is empty in both your output and the logs.
This is the part in the log that shows that you do not have any devices:
return: []
when you create a new disk and select the NFS DC- can you please send me
the screen shot of that?
On 09/27/2013 02:41 PM, Saurabh wrote:
> On 27/09/13 18:40, Dafna Ron wrote:
>> I am not sure direct lun will be supported for an NFS pool... can you
>> try the same from iscsi pool?
>>
>> is this lun being used somewhere else or did you create a new one?
>> you can run vdsClient -s 0 getDeviceList to see the list of the
>> available luns and if the status is free or not.
>> (it should also be in the logs (engine and vdsm) when you discover the
>> luns from the webadmin.)
>>
>>
>> On 09/27/2013 01:43 PM, Saurabh wrote:
>>>
>>> Hi guys, I am running ovirt with NFS as storage pool. Everything is
>>> working quite fine for me. The only problem is I could not add a
>>> new iscsi disk in the Disk tab. When I go to the Disk tab there is an
>>> option of adding new disk from internal storage and another option
>>> is add External (Direct Lun). When I opt for the External (Direct
>>> LUN), I am able to discover the Lun but could not log in to that lun
>>> using the ovirt web console. Whereas when I try to log in to that lun
>>> using iscsiadm on the command line, I am able to log in. Any help??
>>>
>>> _______________________________________________
>>> Users mailing list
>>> Users(a)ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/users
>>
>>
> I have created a new Lun for this purpose only and it is not being
> used anywhere.
> In order to try an iscsi pool I will have to reconfigure the engine,
> because during the engine setup I opted for nfs.
>
> This is the output when I run this command.
>
> [root@ovirt ~]# vdsClient -s 0 getDeviceList
> []
>
>
>
>
>
> this is a log snippet from engine.log when I discover and try to log in
> to that lun.
>
>
> #################################################################################################
>
> 2013-09-27 14:34:55,592 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand]
> (ajp--127.0.0.1-8702-11) START, DiscoverSendTargetsVDSCommand(HostName
> = 192.168.100.59, HostId = 168842b9-071b-4c64-a22d-07ae1e63830b,
> connection={ id: null, connection: 192.168.100.160, iqn: null,
> vfsType: null, mountOptions: null, nfsVersion: null, nfsRetrans: null,
> nfsTimeo: null };), log id: 2e12b97
> 2013-09-27 14:34:55,653 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.DiscoverSendTargetsVDSCommand]
> (ajp--127.0.0.1-8702-11) FINISH, DiscoverSendTargetsVDSCommand,
> return: [{ id: null, connection: 192.168.100.160, iqn:
> iqn.2013.26.ovirt.techblue.lv001, vfsType: null, mountOptions: null,
> nfsVersion: null, nfsRetrans: null, nfsTimeo: null };], log id: 2e12b97
> 2013-09-27 14:36:25,877 INFO
> [org.ovirt.engine.core.bll.storage.ConnectStorageToVdsCommand]
> (ajp--127.0.0.1-8702-2) Running command: ConnectStorageToVdsCommand
> internal: false. Entities affected : ID:
> aaa00000-0000-0000-0000-123456789aaa Type: System
> 2013-09-27 14:36:25,880 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (ajp--127.0.0.1-8702-2) START, ConnectStorageServerVDSCommand(HostName
> = 192.168.100.59, HostId = 168842b9-071b-4c64-a22d-07ae1e63830b,
> storagePoolId = 00000000-0000-0000-0000-000000000000, storageType =
> ISCSI, connectionList = [{ id: null, connection: 192.168.100.160, iqn:
> iqn.2013.26.ovirt.techblue.lv001, vfsType: null, mountOptions: null,
> nfsVersion: null, nfsRetrans: null, nfsTimeo: null };]), log id: 5fca13a4
> 2013-09-27 14:36:26,012 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStorageServerVDSCommand]
> (ajp--127.0.0.1-8702-2) FINISH, ConnectStorageServerVDSCommand,
> return: {00000000-0000-0000-0000-000000000000=0}, log id: 5fca13a4
> 2013-09-27 14:36:26,098 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> (ajp--127.0.0.1-8702-12) START, GetDeviceListVDSCommand(HostName =
> 192.168.100.59, HostId = 168842b9-071b-4c64-a22d-07ae1e63830b,
> storageType=ISCSI), log id: 20fb893b
> 2013-09-27 14:36:29,376 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.GetDeviceListVDSCommand]
> (ajp--127.0.0.1-8702-12) FINISH, GetDeviceListVDSCommand, return: [],
> log id: 20fb893b
>
> ######################################################################################################
>
>
>
>
--
Dafna Ron
[Users] could not add iscsi disk in ovirt.
by Saurabh
Hi guys,
I am running ovirt with NFS as storage pool. Everything is working quite
fine for me. The only problem is I could not add a new iscsi disk in
the Disk tab.
When I go to the Disk tab there is an option of adding new disk from
internal storage and another option is add External (Direct Lun). When I
opt for the External (Direct LUN), I am able to discover the Lun but
could not log in to that lun using the ovirt web console. Whereas when I
try to log in to that lun using iscsiadm on the command line, I am able to
log in.
Any help??
[Users] vdsm live migration errors in latest master
by Dead Horse
Seeing failed live migrations and these errors in the vdsm logs with the
latest VDSM/Engine master.
Hosts are EL6.4.
Thread-1306::ERROR::2013-09-23
16:02:42,422::BindingXMLRPC::993::vds::(wrapper) unexpected error
Traceback (most recent call last):
File "/usr/share/vdsm/BindingXMLRPC.py", line 979, in wrapper
res = f(*args, **kwargs)
File "/usr/share/vdsm/BindingXMLRPC.py", line 211, in vmDestroy
return vm.destroy()
File "/usr/share/vdsm/API.py", line 323, in destroy
res = v.destroy()
File "/usr/share/vdsm/vm.py", line 4326, in destroy
response = self.releaseVm()
File "/usr/share/vdsm/vm.py", line 4292, in releaseVm
self._cleanup()
File "/usr/share/vdsm/vm.py", line 2750, in _cleanup
self._cleanupDrives()
File "/usr/share/vdsm/vm.py", line 2482, in _cleanupDrives
drive, exc_info=True)
File "/usr/lib64/python2.6/logging/__init__.py", line 1329, in error
self.logger.error(msg, *args, **kwargs)
File "/usr/lib64/python2.6/logging/__init__.py", line 1082, in error
self._log(ERROR, msg, args, **kwargs)
File "/usr/lib64/python2.6/logging/__init__.py", line 1082, in error
self._log(ERROR, msg, args, **kwargs)
File "/usr/lib64/python2.6/logging/__init__.py", line 1173, in _log
self.handle(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 1183, in handle
self.callHandlers(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 1220, in
callHandlers
hdlr.handle(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 679, in handle
self.emit(record)
File "/usr/lib64/python2.6/logging/handlers.py", line 780, in emit
msg = self.format(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 654, in format
return fmt.format(record)
File "/usr/lib64/python2.6/logging/__init__.py", line 436, in format
record.message = record.getMessage()
File "/usr/lib64/python2.6/logging/__init__.py", line 306, in getMessage
msg = msg % self.args
File "/usr/share/vdsm/vm.py", line 107, in __str__
if not a.startswith('__')]
File "/usr/share/vdsm/vm.py", line 1344, in hasVolumeLeases
if self.shared != DRIVE_SHARED_TYPE.EXCLUSIVE:
AttributeError: 'Drive' object has no attribute 'shared'
- DHC