iSCSI Discovery cannot detect LUN
by Lukáš Kaplan
Hello all,
do you have any experience with troubleshooting the addition of an iSCSI
domain to oVirt 4.1.1?
I am facing this issue now:
1) I have successfully installed an oVirt 4.1.1 environment with a self-hosted
engine, 3 nodes and 3 storage domains (iSCSI master domain, iSCSI for the
hosted engine and an NFS ISO domain). Everything is working now.
2) But when I want to add a new iSCSI domain, I can discover it and I can
log in, but I can't see any LUN on that storage. (I had the same problem in
oVirt 4.1.0, so I upgraded to 4.1.1.)
3) I then tried to add this storage to another oVirt environment (oVirt
3.6) and there was no problem: I can see the LUN on that storage and I can
connect it to oVirt.
I tried to examine vdsm.log, but it is very detailed and unreadable for me
:-/
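One way to cross-check this outside of vdsm is to query the target directly
from one of the hosts. The sketch below is only illustrative: the portal
address is a placeholder, it needs root, and it assumes iscsiadm and
multipath are installed (they normally are on oVirt hosts):

# Minimal sketch: check target/LUN visibility directly on a host.
# The portal address is a placeholder; run as root.
import subprocess

PORTAL = "192.168.1.100:3260"  # assumed portal address, adjust as needed

def run(cmd):
    """Run a command and print its output."""
    print("$ " + " ".join(cmd))
    print(subprocess.check_output(cmd).decode())

# 1. Discover the targets advertised by the portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# 2. Show current sessions and the SCSI devices attached to them. If the
#    session is there but no attached disks are listed, the host is logged
#    in yet the target exposes no LUNs to this initiator (often an ACL or
#    LUN-mapping issue on the storage side).
run(["iscsiadm", "-m", "session", "-P", "3"])

# 3. List the SCSI block devices and multipath maps the host actually sees.
run(["lsblk", "-S"])
run(["multipath", "-ll"])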
Thank you in advance, have a nice day,
--
Lukas Kaplan
Libvirtd Segfault
by Aaron West
Hi Guys,
I've been having some issues this week with my oVirt 4.1 install on CentOS
7...
It was previously working great, but I made a couple of alterations (which
I've mostly backed out now) and ran a yum update followed by a reboot;
however, after it came back up I couldn't start any virtual machines...
The oVirt web interface reports:
VDSM Local command SpmStatusVDS failed: Heartbeat exceeded
VDSM Local command GetStatsVDS failed: Heartbeat exceeded
VM My_Workstation is down with error. Exit message: Failed to find the
libvirt domain.
Plus some other errors, so I checked my "dmesg" output, and libvirtd seems
to segfault:
[70835.914607] libvirtd[10091]: segfault at 0 ip 00007f681c7e7721 sp
00007f680c7d2740 error 4 in libvirt.so.0.2000.0[7f681c73a000+353000]
Next I checked the logs, but I couldn't find anything that seemed relevant
to me. I see lots of oVirt complaining, but that makes sense if libvirtd
segfaults, right?
So the last-ditch effort was to try strace and hope that with my limited
knowledge I would spot something useful, but I didn't, so I've attached the
strace output to this email in the hope someone else might.
Self-Hosted install on fresh centos 7
by Cory Taylor
I am having some difficulty setting up a self-hosted engine on a new
install of CentOS 7 and would appreciate any help.
Upon running hosted-engine --deploy, I get this:
[ ERROR ] Failed to execute stage 'Misc configuration': [Errno 1] Operation
not permitted:
'/var/run/vdsm/storage/39015a62-0f8f-4b73-998e-5a4923b060f0/9d1ebe53-9997-46fe-a39f-aff5768eae59'
The relevant error in vdsm.log:
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 878, in _run
    return fn(*args, **kargs)
  File "/usr/lib/python2.7/site-packages/vdsm/logUtils.py", line 52, in wrapper
    res = f(*args, **kwargs)
  File "/usr/share/vdsm/storage/hsm.py", line 3145, in prepareImage
    raise se.VolumeDoesNotExist(leafUUID)
VolumeDoesNotExist: Volume does not exist: (u'b5b2412a-f825-43f2-b923-191216117d25',)
2017-03-24 11:19:17,572-0400 INFO (jsonrpc/0) [storage.TaskManager.Task]
(Task='82952b56-0894-4a25-b14c-1f277d20d30a') aborting: Task is aborted:
'Volume does not exist' - code 201 (task:1176)
2017-03-24 11:19:17,573-0400 ERROR (jsonrpc/0) [storage.Dispatcher]
{'status': {'message': "Volume does not exist:
(u'b5b2412a-f825-43f2-b923-191216117d25',)", 'code': 201}} (dispatcher:78)
2017-03-24 11:19:17,573-0400 INFO (jsonrpc/0) [jsonrpc.JsonRpcServer] RPC
call Image.prepare failed (error 201) in 0.01 seconds (__init__:552)
2017-03-24 11:19:17,583-0400 INFO (jsonrpc/2) [dispatcher] Run and
protect: createVolume(sdUUID=u'39015a62-0f8f-4b73-998e-5a4923b060f0',
spUUID=u'53edef91-404b-45d6-8180-48c59def4e01',
imgUUID=u'bff0948c-0916-49f8-9026-c23a571b8abe', size=u'1048576',
volFormat=5, preallocate=1, diskType=2,
volUUID=u'b5b2412a-f825-43f2-b923-191216117d25',
desc=u'hosted-engine.lockspace',
srcImgUUID=u'00000000-0000-0000-0000-000000000000',
srcVolUUID=u'00000000-0000-0000-0000-000000000000', initialSize=None)
(logUtils:51)
logs:
https://gist.github.com/anonymous/542443c68e5c9ebef9225ec1c358d627
Upgrade from 3.6 to 4.1
by Brett Holcomb
I am currently running oVirt 3.6 on a physical server using a hosted
engine environment. I have one server since it's a lab setup. The
storage is on a Synology 3615xs iSCSI LUN, so that's where the VMs are.
I plan to upgrade to 4.1 and need to check that I understand the
procedure. I've read the oVirt 4.1 Release Notes and they leave some
questions.
First, they say I can simply install the 4.1 release repo, update all the
ovirt-*-setup* packages and then run engine-setup.
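In command form, that procedure would be roughly the following (the release
RPM URL is the one documented for 4.1, so treat it as an assumption and
double-check it; it is wrapped in a small Python helper here only to keep
the snippet self-contained):

# Rough sketch of the engine upgrade steps from the 4.1 release notes.
# The release RPM URL below is the documented one for oVirt 4.1, but treat
# it as an assumption and verify before running. Requires root.
import subprocess

steps = [
    # enable the oVirt 4.1 repositories
    ["yum", "install", "-y",
     "http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm"],
    # update only the setup packages first
    ["yum", "update", "-y", "ovirt-*-setup*"],
    # re-run setup so it performs the actual engine upgrade
    ["engine-setup"],
]

for cmd in steps:
    print("$ " + " ".join(cmd))
    subprocess.check_call(cmd)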
1. I assume this is done on the engine VM running on the physical host.
2. What does engine-setup do? Does it know what I have and simply
update, or do I have to go through setup again?
3. Then do I go to the host and update all the ovirt stuff?
However, they then say that for oVirt Hosted Engine you should follow a
link for upgrading, which takes me to a Not Found :( page that has a link
back to the release notes, which link to the Not Found page again... So
what do I need to know about upgrading a hosted-engine setup that there
are no directions for? Are there any gotchas? I thought the release
notes said I just had to upgrade the engine and then the host.
Given that my VMs are on iSCSI, what happens if things go bad and I have
to start from scratch? Can I import the VMs created under 3.6 into 4.1,
or do I have to do something else, like copy them somewhere for backup?
Any other hints and tips are appreciated.
Thanks.
Re: [ovirt-users] Bulk move vm disks?
by nicolas@devels.es
You can use oVirt 4.1 with the ovirt-engine-sdk-python package version
3.x; they are backwards compatible.
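For example, connecting the 3.x SDK to a 4.1 engine looks roughly like
this; the URL, credentials and CA path are placeholders, and depending on
the engine the v3 API may be served at /ovirt-engine/api or
/ovirt-engine/api/v3:

# Minimal connection sketch for ovirt-engine-sdk-python 3.x.
# URL, credentials and certificate path are placeholders.
from ovirtsdk.api import API

api = API(
    url="https://engine.example.com/ovirt-engine/api",  # or .../api/v3 on 4.x engines
    username="admin@internal",
    password="secret",
    ca_file="/etc/pki/ovirt-engine/ca.pem",  # or insecure=True for a test setup
)

# quick sanity check that the connection works
for dc in api.datacenters.list():
    print(dc.get_name())

api.disconnect()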
Regards.
On 2017-03-24 11:12, gflwqs gflwqs wrote:
> Ok, thank you Nicolas, we are using oVirt 4.1.
>
> Regards
> Christian
>
> 2017-03-24 12:03 GMT+01:00 <nicolas(a)devels.es>:
>
>> On 2017-03-24 10:29, Ernest Beinrohr wrote:
>> On 24.03.2017 11:11, gflwqs gflwqs wrote:
>>
>> Hi list,
>> I need to move 600+ VMs from one data domain to another; however,
>> from what I can see in the GUI I can only move one VM disk at a
>> time, which would be very time consuming.
>>
>> Is there any way I can bulk move those VM disks?
>> By the way, I can't stop the VMs; they have to be online during the
>> migration.
>> This is my Python program:
>>
>> # ... API init
>>
>> vms = api.vms.list(query='vmname')
>
> If you're planning to do it that way, make sure you install version
> 3.x of ovirt-engine-sdk-python. Newer versions (4.x) differ too much
> in syntax.
>
> Also, if you want to move a whole Storage Domain, you might be
> interested in listing VMs by the storage domain name, i.e.:
>
> api.vms.list(query='Storage=myoldstoragedomain')
>
> That will return a list of all machines in that storage domain.
>
>> for vm in vms:
>>     print vm.name
>>     for disk in vm.disks.list():
>>         print " disk: " + disk.name + " " + disk.get_alias()
>>         sd = api.storagedomains.get('NEWSTORAGE')
>>
>>         try:
>>             disk.move(params.Action(storage_domain=sd))
>>
>>             disk_id = disk.get_id()
>>             while True:
>>                 print("Waiting for movement to complete ...")
>>                 time.sleep(10)
>>                 disk = vm.disks.get(id=disk_id)
>>                 if disk.get_status().get_state() == "ok":
>>                     break
>>
>>         except:
>>             print "Cannot move."
>>
>> api.disconnect()
>>
>> --
>>
>> Ernest Beinrohr, AXON PRO
>> Ing [1], RHCE [2], RHCVA [2], LPIC [3], VCA [4],
>> +421-2-62410360, +421-903-482603
>>
>> Links:
>> ------
>> [1] http://www.beinrohr.sk/ing.php
>> [2] http://www.beinrohr.sk/rhce.php
>> [3] http://www.beinrohr.sk/lpic.php
>> [4] http://www.beinrohr.sk/vca.php
>>
Re: [ovirt-users] Bulk move vm disks?
by Staniforth, Paul
Hello Christian,
I have recently moved around 700 VM disks between storage domains; you
can select multiple disks in the GUI and move them. I did this on oVirt
3.6. Most of these were dependent on template disks, so I had to copy
the template disks to the destination domain; once all the dependent VM
disks were moved I could remove the template disk from the source
domain.
If the VMs are up it automatically creates a snapshot; in version 3.6
these aren't automatically removed.
Regards,
Paul S.
On 24 Mar 2017 10:12, gflwqs gflwqs <gflwqs(a)gmail.com> wrote:
Hi list,
I need to move 600+ VMs from one data domain to another; however, from
what I can see in the GUI I can only move one VM disk at a time, which
would be very time consuming.
Is there any way I can bulk move those VM disks?
By the way, I can't stop the VMs; they have to be online during the
migration.
Regards
Christian
[Python SDK] Setup Host network with IP address
by TranceWorldLogic .
Hi,
I am trying to set up a host network via the Python SDK as shown below:
myhost_service.setup_networks(
    modified_network_attachments=[
        types.NetworkAttachment(
            host=myhost,
            host_nic=types.HostNic(name="eth0"),
            network=types.Network(name="Hello"),
            ip_address_assignments=[
                types.IpAddressAssignment(
                    assignment_method=types.BootProtocol.STATIC,
                    ip=types.Ip(
                        address="192.168.300.10",
                        netmask="255.255.255.0",
                        version=types.IpVersion.V4,
                    ),
                )
            ],
        ),
    ]
)
But I am getting the error below:
"ovirtsdk4.Error: Fault reason is "Operation Failed". Fault detail is "[Bad
format of IPv4 address]". HTTP response code is 400."
Please help me.
oVirt release version = 4.0
Thanks,
~Rohit
strange behavior of ovirt-node ng update
by Sergey Kulikov
I have one oVirt Node in my test cluster (the others are CentOS), and I'm
observing strange behavior of the update checker in the engine.
In the options I can see:
engine=# select * from vdc_options where option_name='OvirtNodePackageNamesForCheckUpdate';
 option_id |             option_name             |        option_value        | version
-----------+-------------------------------------+----------------------------+---------
       124 | OvirtNodePackageNamesForCheckUpdate | ovirt-node-ng-image-update | general
(1 row)
So it tries to check for an updated version of ovirt-node-ng-image-update,
but there is no ovirt-node-ng-image-update package installed inside the
updated node image, so the engine always shows available updates for this
node to the same version:
> Check for available updates on host XXX was completed successfully with message 'found updates for packages ovirt-node-ng-image-4.1.1-1.el7.centos, ovirt-node-ng-image-update-4.1.1-1.el7.centos'.
I saw a bug in Bugzilla saying that this should be fixed in 4.0.
The engine was initially set up as a 4.x version, then updated between
releases, finally to 4.1.0, and today to 4.1.1.
I saw this behavior on the node with 4.1.0 and also with 4.1.1, and on this
page: http://www.ovirt.org/node/4.0/update/
Maybe my OvirtNodePackageNamesForCheckUpdate value was changed between
releases but engine-setup left it untouched, and I should change it
manually? If so, what should it look like?
--
ovirt-shell backup
by Marcin Kruk
Hello, is it possible to execute the actions below from ovirt-shell? (A
rough Python SDK sketch of the same flow is included after the list.)
1. Make a snapshot
( ovirt-shell -E 'add snapshot --parent-vm-name <vmname> --description <desc>' )
2. Make a clone from the snapshot above
( ? )
3. Export the clone
( ? )
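For context, the same flow looks roughly like this with the Python SDK
(3.x); the VM, cluster and export-domain names are placeholders, the calls
are written from memory and the waits are simplified, so treat it as an
outline rather than tested code:

# Rough outline of the snapshot -> clone -> export flow with the Python SDK 3.x.
# VM, snapshot, cluster and export-domain names are placeholders.
import time
from ovirtsdk.api import API
from ovirtsdk.xml import params

api = API(url="https://engine.example.com/ovirt-engine/api",
          username="admin@internal", password="secret", insecure=True)

vm = api.vms.get("myvm")

# 1. Make a snapshot.
snap = vm.snapshots.add(params.Snapshot(description="backup"))
# (In real code, poll the snapshot status until it is "ok" before cloning;
#  a crude sleep keeps the sketch short.)
time.sleep(60)

# 2. Clone a new VM from that snapshot.
api.vms.add(params.VM(
    name="myvm-clone",
    cluster=api.clusters.get("mycluster"),
    snapshots=params.Snapshots(snapshot=[params.Snapshot(id=snap.get_id())]),
))
# Wait until the clone's disks are copied and the VM settles in "down".
while api.vms.get("myvm-clone").get_status().get_state() != "down":
    time.sleep(10)

# 3. Export the clone to an export storage domain.
clone = api.vms.get("myvm-clone")
clone.export(params.Action(storage_domain=api.storagedomains.get("export_domain")))

api.disconnect()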